Sometimes Two Processes are Better than One

This is a short post to save you the time of discovering that, in at least one case, the approach usually thought to be more efficient is not.

I’m working on some ETL integrations where the source is sometimes so complicated that it is too painful to work with within the Informatica Cloud Mapping Designer. Fortunately, the customer also has Informatica Cloud Real Time, which has some handy APIs for accessing and re-arranging data from REST and SOAP services. In one particular case I need to check each record before sending the result set to a Mapping Task. I followed an example where one process calls a sub-process designated to handle only the object type of the record being processed (a simplified version is depicted below).

Top Process calling Sub-Process for a List
Sub Process that Writes a Particular Object Type to a file.

This worked as described, though looking at the resulting process, it seemed that I could eliminate the sub-process by recursing the file writer call inside the first process.

One Recursive Process Writing a List to a File

The recursion approach worked, but it was much, much slower than handing the file-writing task off to the sub-process. This was unexpected given the minimal processing involved, but there you have it.


How to Encrypt and Transmit Files in Informatica Cloud with a Single Mapping Object

Recently I had the need to encrypt a file before sending it via FTP. A colleague of mine (JayJay Zheng) had discovered that a Mapping object could perform an FTP by configuring the Source transformation with a File Processor connection and then entering the FTP connection details as Query Options on the connection as shown below:

SFTP with a Mapping Object

When it came time to encrypt the file prior to transmission, I found KB 476543, HOW TO: Encrypt the data in Data Synchronization task within Informatica Cloud, which did the trick. But in a later design review session, the same colleague pointed out that the number of objects to be maintained could be reduced by applying the same Mapping configuration approach used for FTP to the encryption as well, running the two in sequence within the mapping:

Encrypt and FTP in one Mapping with Piping

How Not to Miss Your New Informatica Cloud Connection

Today I was faced with a requirement clarification where the target of a File Connector needed to be on a network drive rather than on the SecureAgent server. This is pretty straightforward on Linux where you simply mount the network drive and point to the mount point, but this was a Windows installation which I have not worked with much lately.

To be cautious, I made copies of my working objects to test the change before updating something that I knew worked, specifically the Mapping and Flat File Connection. The change failed, with the cryptic error message “Invalid mapping import and no import log generated.”

I will spare you the various things I tried that caused me to thump my head on my desk repeatedly.

Turns out, when you change the Connection to a different Connection of the same type, the Object name remains unchanged on the screen, but if you click the Select button next to it you will find that the Target Object has been reset to the default, which is an existing file. That is perfectly fine for updating an existing file, but not so great when your goal is to generate a new file each time. The fix is to set it back afterwards. Screens below for clarification:

Before Connection Change
After Connection Change

One More Solution to “Integrated Weblogic domain was not built successfully” on Windows

If you search for this issue you will find all sorts of fixes that actually work. To summarize the two key ones:

  1. Exit JDeveloper, delete [JDEV_USER_HOME]\system11.\DefaultDomain
  2. Make sure the path to [JDEV_USER_HOME] contains no spaces or dots

Another, newer “common” issue is solved by:

I’m running Windows 7, so the Jython fix was not for me, and the first two I have used successfully in the past, but they failed me today. Before giving up entirely and falling back to a pre-built VM, I opened up [MIDDLEWARE_HOME]\Middleware11119\wlserver_10.3\common\bin\commEnv.cmd and found something funny (not “ha ha” funny but “&#%!@?!” funny). Instead of the JAVA_HOME path I provided during the JDeveloper Studio installation (also reflected in Help > About > Properties), it contained the path from the system properties, placed there by an Oracle 12c installation.

So I changed the JAVA_HOME in commEnv.cmd, deleted [JDEV_USER_HOME]\system11.\DefaultDomain one more time and was off and running.


Establishing Address Doctor Web Service Account Password

As part of the Informatica Cloud Master Certification process, there is a series of graded labs to be performed using Address Doctor web services. The web services are very nicely documented, though I struggled for some time over why I could not get the initial authentication working. I made the incorrect assumption that the password used to create the dashboard login account is the same as the one used for making service calls. It is not.

Perhaps it is only with the free account used for training, but the web service account password was never sent to me. Once I figured out that it was a different password, I then needed to figure out how to change it, which I will share here.

Step 1: Locate your account ID


Step 2: Log out of the Data Quality Center and click the link to Login Using Account ID


Step 3: Click Forgot My Password


Step 4: Provide the Account ID and your Email Address


Step 5: Use the password that is emailed to you to log in and run Web Services

You can change this provided password if you like from the screen where you obtained your account ID.


A Simplified Oracle 11g Database Command Line Linux Basic Install

I have always struggled with setting up a fresh Oracle 11g Database install. I have made copious notes over the years on the steps to take, and each time I go back to them, they serve only as clues to the mystery. This time I had to do the whole thing on a Linux box without the use of the GUI installers, and I have captured and scripted the process once and for all.

Of course, most new installs these days will be 12c and should be performed by someone with deep database skills. This article is intended for those that simply need a database for a proof-of-concept or development environment where 11g will suffice.


Before I get into the details, I want to point out that there are three key blog posts that I found most useful in this process. They are:


These steps are based on using the scripts in this article. Feel free to adjust to fit your own needs, which is how this process came about.

As a side note, while putting the process together and again while documenting it, I used a VM and took a snapshot at the completion of each step to save time if some issue arose in a particular step. You will probably benefit from this as well, since small differences can prevent fully using these steps exactly as described.


The Database installs can be found at . If you have access to MOS, you can get a slightly newer version, and you will only need the first two files.

Environment Variables

These values need to be set in the environment. You can place them in a bash script under /etc/profile.d for all users, or in the oracle user’s home directory if necessary. If you use the /etc/profile.d folder, be sure to chmod 644 the file so it is sourced properly.

Regardless of where you place these values, be sure they are set before continuing and placed where they will always be set when administering the database in the future. This may require either sourcing the file or rebooting.
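As a sketch, such a file might look like the following; the paths and SID here are illustrative assumptions, not values taken from the article’s scripts:

```shell
# /etc/profile.d/oracle_env.sh -- example environment for an Oracle 11g install.
# All paths and the SID below are assumptions; match them to your own values.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME="$ORACLE_BASE/product/11.2.0/dbhome_1"
export ORACLE_SID=orcl
export PATH="$ORACLE_HOME/bin:$PATH"
```

Source the file (`. /etc/profile.d/oracle_env.sh`) or reboot so the values are in effect before continuing.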

Script Variables will set all variables except for the names of the install zip files, as those can change over time. Update them as necessary in

Install Prep Script

The script below performs the following tasks:

Creates User and Groups

The _create_groups function takes an inelegant approach to create the oracle user and groups. For those of you better versed in Linux administration, feel free to create your own script. Please share if you do.

Creates the ORACLE_HOME Path and Updates SELinux

For reasons I can’t fathom, the installer will not create the ORACLE_HOME path, so it must be created before installing. Once installed, SELinux enforcement can cause problems getting the listeners running. Since I only use the database behind a firewall, turning it off works best for me. I would not take that approach with a production server, though.
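For illustration, the SELinux change can be sketched against a scratch copy of the config; on a real server you would edit /etc/selinux/config as root and run `setenforce 0` to change the running system:

```shell
# Sketch of the SELinux change, demonstrated on a scratch copy of the config.
# On a real server: edit /etc/selinux/config (as root) and run `setenforce 0`.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"  # sample config contents
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"     # the edit itself
cat "$cfg"
```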

Dependency Installs

As noted in the Acknowledgements, there is a useful script for making sure the dependencies are in place.

Set Response File Variables

If you use the response files provided in the Downloads section, this will update the values to match your Environment Variables. If you are using your own or from a different source, comment out the _create_rsp_files function.
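The idea behind _create_rsp_files can be sketched as simple token substitution; the @ORACLE_BASE@ placeholder and the template contents below are made up for illustration:

```shell
# Sketch: stamp an environment value into a response-file template.
# The @ORACLE_BASE@ token and the template line are hypothetical.
export ORACLE_BASE="${ORACLE_BASE:-/u01/app/oracle}"
rsp=$(mktemp)
printf 'ORACLE_BASE=@ORACLE_BASE@\n' > "$rsp"      # stand-in template line
sed -i "s|@ORACLE_BASE@|$ORACLE_BASE|" "$rsp"      # substitute the live value
cat "$rsp"
```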

Install the Database

As noted earlier, be sure that the install zip file names are correct in the script, which will perform the following:

  1. Unzip the install files
  2. Perform a silent install of the database application
  3. Create a default listener
  4. Create a default database
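The steps above can be sketched as a command sequence; the zip, installer, and response-file names are assumptions, and the commands are echoed rather than executed so the sketch stands alone:

```shell
# Sketch of the silent-install sequence (run as the oracle user in practice).
# File and response-file names are assumptions; check them against your download.
zip1=linux.x64_11gR2_database_1of2.zip
zip2=linux.x64_11gR2_database_2of2.zip
run() { echo "+ $*"; }   # echo only; change to "$@" to actually execute
run unzip -q "$zip1"
run unzip -q "$zip2"
# Silent install of the database software
run ./database/runInstaller -silent -waitforcompletion \
    -responseFile "$PWD/db_install.rsp"
# Create a default listener, then a default database
run "$ORACLE_HOME/bin/netca" /silent /responsefile "$PWD/netca.rsp"
run "$ORACLE_HOME/bin/dbca" -silent -responseFile "$PWD/dbca.rsp"
```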

Post Install Steps

After the installation, there are two scripts provided by Oracle that need to be run as root. Then there are some steps to make the database start every time the server does, because I have not had a case where that wasn’t what I needed. As noted in the Acknowledgements, How I Enable Autostarting of Oracle Database for Demonstrations and Development (by Christopher Jones) was my reference, and this process is taken almost completely from that blog entry.

Once the above is completed, the install files are no longer necessary, so they are removed.
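The core of that autostart setup is flipping the restart flag in /etc/oratab and running Oracle’s dbstart script at boot; sketched here against a scratch copy (the SID and home path are assumptions):

```shell
# Demonstrated on a scratch copy; on the real server edit /etc/oratab itself.
# The SID line and home path below are assumptions.
oratab=$(mktemp)
printf 'orcl:/u01/app/oracle/product/11.2.0/dbhome_1:N\n' > "$oratab"
# Change the trailing N to Y so dbstart will start this SID at boot
sed -i 's/:N$/:Y/' "$oratab"
cat "$oratab"
# At boot, an init script or rc.local entry then runs (as the oracle user):
#   su - oracle -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
```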

Verify the Installation

After running the scripts, you will need to either log out and log back in as oracle, or simply reboot the server. First test with lsnrctl status, which should yield the following:


On occasion, this may not work properly. Before sending me an email reminding me of my imperfections, you may first want to try:

And then either log out and back in or (preferably) reboot the server and try again. This generally fixed it for me while coming up with these steps.


The scripts described here are available at

One More Thing

I found installing sqldeveloper to be a great way to do more thorough testing. At the time of this writing it can be found at


Revisiting the Question of Build versus Buy for Web Portal Solutions

The general wisdom stated in the architectural principles of many an enterprise is “Buy before build”.  This often makes sense since the cost of a COTS license will be lower than the labor expense required to develop the same functionality in-house. There is also the peace of mind that a reputable vendor will own the maintenance of the system core.

Portals may be challenging this general wisdom.

View from the Outside

Putting aside the myriad of capabilities packaged into the common portal product (though we will come back to them later), the essence of a portal when the business looks at the bottom line is the presentation layer of a web site. The technologies used in the presentation layer of web applications have been evolving over the past several years at a pace much faster than the average portal product release cycle. When new versions are released, there is often significant re-work required to upgrade or migrate. Because product vendors need to provide solid, dependable software, some of the included technology may already be out of date by the time it is placed in production.

To be clear, this is not to disparage portal product vendors. Anyone who has ever maintained a portal application much prefers having a thoroughly tested platform backed by 24/7 support when something goes wrong. There are trade-offs required to enjoy those benefits, though.

If being able to quickly put new UI approaches and technologies into production is a key business value in your enterprise, you will need to build rather than buy at least part of your portal platform.

Under the Hood

To compare the value of build vs buy vs customize, it is useful to consider what buying gets you. What you get for your license dollar will vary greatly from vendor to vendor, and even within a single vendor if they offer many options. To keep this from becoming book-length, let’s stick to the most common features available for the most common purposes; apply your own due diligence to modify this list when examining your own platform selection.

Because terminology can vary greatly from one product to another, we will define these key, common features for the purpose of comparison.



Authentication: Verify the identity of the user
Authorization: What the user is allowed to see or do
Personalization: Behavior based on information about the user
Context-Based Navigation: Site navigation driven by Authentication, Authorization and Personalization
Page Composition: The arrangement of components on a page, sometimes influenced by Authentication, Authorization and Personalization
Content Integration: Inclusion of managed content, sometimes influenced by Authentication, Authorization and Personalization
Release Promotion: Moving features and functions from Development to Staging to Production

Working with the assumption that these are out of the box features, our comparison needs to consider if we need the feature and what it takes to create it ourselves.


Authentication

Every application server has some form of authentication mechanism, and all of the better application frameworks leverage the underlying server standards. While not the least important feature, it is the simplest to implement without a framework. In some cases, it is actually easier to do so.


Authorization

If you are authenticating against an LDAP, most application servers have easy-to-use hooks into roles. Popular J2EE application servers have standards-based authorization APIs. Initially this may be a little more effort than Authentication, but if you document your approach and publish it internally where it is easily accessed, this should not be a major hurdle.


Personalization

Many development teams struggle with personalization even with a COTS framework where it is a prominent feature. In some cases where business requirements, developer training, product APIs and enterprise data architecture are misaligned, a custom approach may actually be easier. On the opposite end of the spectrum, when all of those factors are in alignment a vendor-provided solution is a major time saver.

Context-Based Navigation

This is often the most appreciated feature of a portal product. While at the UI layer the result is “show or not”, portal frameworks, in concert with an IDE and/or administration UI, provide a rich set of features for arriving at that simple Boolean result. Then again, products need to support a very broad set of circumstances in order to satisfy the most customers. You only need to implement those features that are part of your business requirements. In some cases, that will be a very simple implementation. In others you will learn why portal products are so popular. The key is to get very clear on your requirements and then design your solution to be flexible and maintainable.

Page Composition

This is a feature that all portals provide, and it would be very difficult to build from scratch with the same feature sets that vendors provide. However, very few organizations use all of those features; the complexity comes from satisfying the requirements of all companies rather than just yours. If there is no need for runtime updates to page configurations, there is no need for this feature. If the need is there, tightly managing the requirements and having technical and business stakeholders work closely to weigh cost against value will let you determine the best approach for your implementation.

Content Integration

Content management and portals have had a strange relationship dating back before there were any common standards for either. Some portals have content management features built in and some content management systems provide portal application features. Some have standards-based integration points and others simply recommend processes that allow the content to be re-used.

Release Promotion

What sets each vendor apart is how they implement the features. Each product has its own way of maintaining configuration, and this results in either a specific tool or process to move that configuration from development to staging and production (plus any interim steps your enterprise happens to use). For solutions built in-house, you will need to define these processes or create these tools to provide minimal disruption to services while making updates. If a vendor-provided product is sufficiently customized, an enterprise-specific approach may be needed anyway.


So we see that we can create all of the functionality a portal provides without a portal product. The purpose of a framework is to provide commonly desired features that are already built and integrated with each other, and that integration is a key value portal products provide.

If all you need are these features, then standard wisdom of buy before build holds true. It still holds true if you can customize a standard offering in a way that is maintainable for you and supported by the vendor.

However, if you need something that a vendor-based solution is difficult to customize for, or that cannot be supported by the vendor (support pricing is based on the product working as designed), then you need to weigh the value of that something[1] against building and maintaining it yourself. You may find that the cost of ownership for the latest trend does not have the ROI to justify following it, and still go the out-of-the-box route.

Or you may go your own way and be the subject of the next web site trend-setter spotlight report.

[1] Here we are thinking about modern UI features, but the analysis holds true for anything you need that is of sufficient value.


Tailing in Windows with a Right-Click

Being late to Linux in my career, I’m fascinated by things many probably already find mundane, like tailing logs. Someone finally suggested to me that I could use Cygwin to do this in Windows. He was my hero for the day!

Speaking of heroes, the members of Stackoverflow have a great way to open a Cygwin console in any directory with a right-click, described at . I used the registry trick because the command-line approach didn’t work for me.


Tar That There Here

Authorization in Linux can be very fine-grained, a feature that admins take advantage of to keep the non-admins from making a mess of things. This is generally a good thing, though it can occasionally be frustrating. One such occasion is when newbies need to tar up a folder they have permissions on but do not have permissions to create files in that folder’s parent. For example, as a developer role on a machine I have ownership of myapp that is inside apps, which is owned by root. This would look something like:

The tar command creates the tar file in the directory where it is run. If I wanted to create myapp.tar.gz, I would normally run tar czf myapp.tar.gz myapp from inside the /apps path. But with no create permissions in that folder, I just get a snarky response from Linux.

Skipping the details of head banging on desk and key banging on Google, I found the following approach that does the trick.

From a path you have write permissions to (almost always your home directory if nowhere else), run:
tar czf [TARFILENAME].tar.gz -C [PARENT_DIR]/ [DIR_TO_TAR]
For example:
tar czf folder.tar.gz -C /var/www/ folder

The ‘-C’ tells tar to start from the path that follows rather than from where you are. So, running tar czf myapp.tar.gz -C /apps/ myapp from my home directory creates myapp.tar.gz there.
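The whole sequence can be verified end-to-end with scratch paths (the directory names below are made up for illustration; substitute your real ones):

```shell
# Recreate the layout under a temp directory instead of the real /apps
base=$(mktemp -d)
mkdir -p "$base/apps/myapp"
echo "config" > "$base/apps/myapp/app.conf"

# From a writable location, archive myapp by pointing -C at its parent
cd "$base"
tar czf myapp.tar.gz -C "$base/apps/" myapp

# Entries in the archive are relative to the parent: myapp/, myapp/app.conf
tar tzf myapp.tar.gz
```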

Problem solved. Of course, I only made this tar file because there were problems with the app, so there is still the problem of debugging, but I think we can both do that already.