Search Results

Search found 300 results on 12 pages for 'christopher berman'.

Page 1/12 | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Neuberger Berman Defines CRM Strategy In Asset Management

    - by michael.seback
    Neuberger Berman Defines Front Office Strategy for the New Firm. Neuberger Berman is a majority employee-owned independent asset management firm with a heritage dating back to 1939. It provides a range of investment options, wealth planning services, and advice to meet individual needs. It also offers a broad range of financial capabilities and specializes in developing innovative and customized investment solutions for institutions. ... "The Insight team's analysis was critical to helping us assess the strengths and weaknesses of our Siebel implementation. It helped us to come up with our strategic plan for using customer relationship management and business intelligence capabilities." - Roxana Feldmann, Senior Vice President Technology ...Read more.

    Read the article

  • Principles of Big Data By Jules J Berman, O’Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx A fantastic book! It deserves to be part of the fundamentals of Big Data as a field of science, if it isn't already, and I highly recommend it to anyone working in Big Data. I confess this book is one of my best reads this year, for a number of reasons. It is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, sometimes die. More importantly, it is filled with valuable advice and an accurate, almost overwhelming (in a good way) body of references, and the author does not stop there: numerous technical excerpts, links and examples let you quickly accomplish many daunting tasks, or at least make you aware of what you need to do as a data practitioner (excuse the word "practitioner"; I simply couldn't find a better term for everyone who faces Big Data). Be aware that Jules Berman's background is in medicine, so the book naturally discusses that subject a great deal; it is clearly dear to the author's heart. That does not make the book any less significant, quite the opposite: if there is one area of science or practice where Big Data projects can reap the biggest benefits, it is medical science. Let's make cancer history! On a personal note, as a database and BI professional, the book helped me better understand the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them. I was also impressed by the depth and number of mining algorithms covered; it made me curious enough that I will be stretching my wallet to acquire several books that go deeper into the most popular of them. My favorite parts of the book? All of them, really, but especially chapter 9, "Analysis", which is very close to my heart because it let me see what I do with data from a different angle, and the chapter that follows, "Special Considerations", which is its logical continuation. The writing is accessible to readers at all levels, and I had no technical problems reading the ebook on my 8" tablet or on a large monitor. If I had to say something negative, it would be that the first part of the book reads like academic material, though the tone relaxes as the book progresses. I am impressed by Jules' ability to use several programming languages and OSS tools, bravo! And I agree: it is not too hard to grasp at least the principles of a modern programming language, which seems to be becoming a de facto standard piece of knowledge for any modern human being. So grab a copy of this book, read it end to end, and shield yourself from mistakes at any stage of your Big Data initiative; the book also helps you build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • Error Installing ruby with RVM Single User mode on Arch Linux

    - by ChrisBurnor
    I've just installed RVM on ArchLinux x64 in single user mode via the recommended install script curl -L https://get.rvm.io | bash -s stable I've also installed all the requirements listed in rvm requirements However, I'm having trouble actually installing any version of ruby. And getting the following error: arch:~ % rvm install 1.9.3 No binary rubies available for: ///ruby-1.9.3-p194. Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies. Fetching yaml-0.1.4.tar.gz to /home/christopher/.rvm/archives % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 460k 100 460k 0 0 702k 0 --:--:-- --:--:-- --:--:-- 767k Extracting yaml-0.1.4.tar.gz to /home/christopher/.rvm/src Prepare yaml in /home/christopher/.rvm/src/yaml-0.1.4. Configuring yaml in /home/christopher/.rvm/src/yaml-0.1.4. Error running ' ./configure --prefix=/home/christopher/.rvm/usr ', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/configure.log Compiling yaml in /home/christopher/.rvm/src/yaml-0.1.4. Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/make.log Please note that it's required to reinstall all rubies: rvm reinstall all --force Installing Ruby from source to: /home/christopher/.rvm/rubies/ruby-1.9.3-p194, this may take a while depending on your cpu(s)... ruby-1.9.3-p194 - #downloading ruby-1.9.3-p194, this may take a while depending on your connection... ruby-1.9.3-p194 - #extracting ruby-1.9.3-p194 to /home/christopher/.rvm/src/ruby-1.9.3-p194 ruby-1.9.3-p194 - #extracted to /home/christopher/.rvm/src/ruby-1.9.3-p194 Skipping configure step, 'configure' does not exist, did autoreconf not run successfully? ruby-1.9.3-p194 - #compiling Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/make.log There has been an error while running make. Halting the installation. The log files are as follows: arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/configure.log __rvm_log_command:32: permission denied: arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/make.log make: *** No targets specified and no makefile found. Stop. arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/make.log make: *** No targets specified and no makefile found. Stop.

    Read the article

  • How to install the latest version of google earth on ubuntu 12.10 64-bit?

    - by user114769
    Chris here with a huge problem with the latest version of Google Earth. Gosh... whenever I try to launch the app, this comes out. I've tried this web site and nothing worked (thank you so much for even reading this, may you guys have a nice day): christopher@christopher-E4300:~$ google-earth Google Earth has caught signal 11. We apologize for the inconvenience, but Google Earth has crashed. This is a bug in the program, and should never happen under normal circumstances. A bug report and debugging data have been written to this text file: /home/christopher/.googleearth/crashlogs/crashlog-50cbd67e.txt Please include this file if you submit a bug report to Google. https://help.ubuntu.com/community/GoogleEarth#Installing_the_.deb_file_downloaded_from_the_Google_Earth_Website Here is the content of /home/christopher/.googleearth/crashlogs/crashlog-50cbd67e.txt Major Version 7 Minor Version 0 Build Number 0001 Build Date Oct 29 2012 Build Time 19:13:39 OS Type 3 OS Major Version 3 OS Minor Version 5 OS Build Version 0 OS Patch Version 0 Crash Signal 11 Crash Time 1355535998 Up Time 0.789556 Stacktrace from glibc: ./libgoogleearth_free.so(+0x1e9cfb)[0xf757dcfb] ./libgoogleearth_free.so(+0x1e9f43)[0xf757df43] [0xf7726400]

    Read the article

  • Issues running commands

    - by Joel
    Every time I run a command I get this back. E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied) E: Unable to lock directory /var/lib/apt/lists/ E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied) E: Unable to lock the administration directory (/var/lib/dpkg/), are you root? christopher@christopher:~$ This didn't start happening until I changed my device name.

    Read the article

  • Enforcing Constraints Upon Data Documents of Various Formats

    - by Christopher Berman
    This seems like the sort of problem that must have been solved elegantly long ago, but I haven't the foggiest how to google it and find it. Suppose you're maintaining a large legacy system, which has a large collection of data (tens of GB) of various formats, including XML and two different internal configuration formats. Suppose further that there are abstract rules governing the values these files may or may not contain. EXAMPLE: File A defines the raw, mathematical data pertaining to the aerodynamics of a car for consumption by the physics component of the system. File B contains certain values from File A in an easily accessible XML hierarchy for consumption by a different component of the system. There exists, therefore, an abstract rule (or constraint) such that the values from File B must match the values from File A. This is probably the simplest constraint that can be specified, but in practice, the constraints between files can become very complicated indeed. What is the best method for managing these constraints between files of arbitrary formats, short of migrating them over to an RDBMS (which simply isn't feasible for the foreseeable future)? Has this problem been solved already? To be more specific, I would expect the solution to at least produce notifications of violated constraints; the solution need not resolve the constraints. ============================== Sample file structures File A (JeepWrangler2011.emv): MODEL JeepWrangler2011 { EsotericMathValueX 11.1 EsotericMathValueY 22.2 EsotericMathValueZ 33.3 } File B (JeepWrangler2011.xml): <model name="JeepWrangler2011"> <!--These values must correspond to File A's EsotericMathValues--> <modelExtent x="11.1" y="22.2" z="33.3"/> [...] </model>
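
    Not part of the original question, but a minimal sketch of the kind of constraint checker being asked about, assuming the simple .emv and XML layouts shown in the samples above (the parser, file names and comparison rule are all illustrative assumptions):

        # Hypothetical sketch: verify that File B's modelExtent matches File A's
        # EsotericMathValues. The .emv parsing and the constraint itself are
        # assumptions based only on the sample files in the question.
        import re
        import xml.etree.ElementTree as ET

        def parse_emv(path):
            """Collect 'EsotericMathValue<axis> <number>' pairs from the MODEL block."""
            values = {}
            with open(path) as f:
                for line in f:
                    m = re.match(r'\s*(EsotericMathValue[XYZ])\s+([-\d.]+)', line)
                    if m:
                        values[m.group(1)] = float(m.group(2))
            return values

        def check_constraint(emv_path, xml_path):
            """Return a list of violations; an empty list means the files agree."""
            emv = parse_emv(emv_path)
            extent = ET.parse(xml_path).find('modelExtent')
            violations = []
            for axis, key in (('x', 'EsotericMathValueX'),
                              ('y', 'EsotericMathValueY'),
                              ('z', 'EsotericMathValueZ')):
                if extent is None or float(extent.get(axis)) != emv.get(key):
                    violations.append('%s does not match modelExtent %s' % (key, axis))
            return violations

        print(check_constraint('JeepWrangler2011.emv', 'JeepWrangler2011.xml'))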

    Read the article

  • Why is this PHP loop rendering every row twice?

    - by Christopher
    I'm working on a real frankensite here not of my own design. There's a rudimentary CMS and one of the pages shows customer records from a MySQL DB. For some reason, it has no probs picking up the data from the DB - there's no duplicate records - but it renders each row twice. The page PHP is viewable at http://christopher.pastebin.com/DQkjjG3s (attempted to include in this post but it was horribly mangled, think it's important to have it all in context). I'm not the world's best PHP expert but I think I can see an error in a for loop when there is one... But everything looks ok to me. You'll notice that the customer name is clickable; clicking takes you to another page where you can view their full info as held in the DB - and for both rows, the customer ID is identical, and manually checking the DB shows there's no duplicate entries. The code is definitely rendering each row twice, but for what reason I have no idea. All pointers / advice appreciated.

    Read the article

  • How to update the text of a tag in XML using Elementree

    - by Christopher
    Using elementree, the easiest way to read the text of a tag is to do the following: import elementtree.ElementTree as ET sKeyMap = ET.parse("KeyMaps/KeyMap_Checklist.xml") host = sKeyMap.findtext("/BrowserInformation/BrowserSetup/host") Now I want to update the text in the same file, hopefully without having to re-write it with something easy like: host = "4444" sKeyMap.replacetext("/BrowserInformation/BrowserSetup/host") Any ideas? Thx in advance Christopher
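
    Not an answer from the original thread, but the usual approach with the standard-library ElementTree API is to locate the element, assign its .text attribute, and write the tree back out. A minimal sketch, assuming BrowserInformation is the root element of KeyMap_Checklist.xml (the exact path is a guess based on the findtext() call above, since the file itself isn't shown):

        # Minimal sketch; the element path below is an assumption about the XML layout.
        import xml.etree.ElementTree as ET

        tree = ET.parse("KeyMaps/KeyMap_Checklist.xml")
        host = tree.find("BrowserSetup/host")            # path is relative to the root element
        if host is not None:
            host.text = "4444"                           # update the tag's text in memory
            tree.write("KeyMaps/KeyMap_Checklist.xml")   # re-write the file with the new value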

    Read the article

  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel

    Read the article

  • Wix, Launch application after installation complete, with UAC turned on.

    - by Christopher Roy
    Good day. I've been building an installer for our product using the WiX (Windows Installer XML) technology. The expected behavior is that the product is launched after installation if the check box is checked. This has been working for some time now, but we found out recently that UAC on Win 7 and Vista is stopping the application from launching. I've done some research and it has been suggested to me that I should add the attributes Execute='deferred' and Impersonate='no'. I did, but then found out that to execute deferred, the CustomAction has to be performed between the InstallInitialize and InstallFinalize phases, which is not what I need. I need the product to launch AFTER InstallFinalize, IF the launch checkbox is checked. Is there any other way to elevate permissions? Any and all answers, suggestions, or reasonings will be appreciated. Cheers, Christopher Roy.

    Read the article

  • Win32 scrolling examples

    - by Christopher
    Could anyone point me to (or provide?) some nice, clear examples of how to implement scrolling in Win32? Google brings up a lot of stuff, obviously, but most examples seem either too simple or too complicated for me to be sure that they demonstrate the right way of doing things. I use LispWorks CAPI (cross-platform Common Lisp GUI lib) in my current project, and on Windows I have a hard-to-figure-out bug relating to scrolling; basically I want to do some tests directly via the Win32 API to see if I can shed some light on the situation. Many thanks, Christopher

    Read the article

  • Access <body> tag properties from form found in mediaboxAdvanced lightbox.

    - by Christopher Richa
    I am building my portfolio website simply using HTML, Flash, and the Mootools Javascript framework. I have decided to implement a theme changing feature using Javascript and cookies. When the user clicks on the "Change Theme" link, a mediaboxAdvanced lightbox appears, containing a real-time form which allows you to change the theme on the portfolio. Here's the code for the form: <div id="mb_inline" style="display: none; margin:15px;"> <h3>Change Your Theme</h3> <form> <fieldset> <select id="background_color" name="background_color"> <option value="#dcf589">Original</option> <option value="#000FFF">Blue</option> <option value="#00FF00">Green</option> </select> </fieldset> </form> </div> I know, there is no submit button, but as soon as something is changed in that form, the following Mootools code happens: var themeChanger = new Class( { pageBody : null, themeOptions : null, initialize: function() { this.pageBody = $(document.body); this.themeOptions = $('background_color'); this.themeOptions.addEvent('change', this.change_theme.bindWithEvent(this)); }, change_theme: function(event) { alert("Hello"); } }); window.addEvent('domready', function() { var themeChanger_class = new themeChanger(); }); Now this is only a test function, which should be triggered when the dropdown menu changes. However, it seems that none of it works when the form is in the lightbox! Now if I decide to run the form outside of the lightbox, then it works great! Am I missing something? If you need more examples, I will fill in on demand. Thank you all in advance. -Christopher

    Read the article

  • Using PHP OCI8 with 32-bit PHP on Windows 64-bit

    - by christopher.jones
    The world migration from 32-bit to 64-bit operating systems is gaining pace. However I've seen a couple of customers having difficulty with the PHP OCI8 extension and Oracle DB on Windows 64-bit platforms. The errors vary depending how PHP is run. They may appear in the Apache or PHP log: Unable to load dynamic library 'C:\Program Files (x86)\PHP\ext\php_oci8_11g.dll' - %1 is not a valid Win32 application. or Warning oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that PATH includes the directory with Oracle Instant Client libraries Other than IIS permission issues a common cause seems to be trying to use PHP with libraries from an Oracle 64-bit database on the same machine. There is currently no 64-bit version of PHP on http://php.net/ so there is a library mismatch. A solution is to install Oracle Instant Client 32-bit and make sure that PHP uses these libraries, while not interferring with the 64-bit database on the same machine. Warning: The following hacky steps come untested from a Linux user: Unzip Oracle Instant Client 32-bit and move it to C:\WINDOWS\SYSWOW64\INSTANTCLIENT_11_2. You may need to do this in a console with elevated permissions. Edit your PATH environment variable and insert C:\WINDOWS\SYSTEM32\INSTANTCLIENT_11_2 in the directory list before the entry for the Oracle Home library. Windows makes it so all 32-bit applications that reference C:\WINDOWS\SYSTEM32 actually see the contents of the C:\WINDOWS\SYSWOW64 directory. Your 64-bit database won't find an Instant Client in the real, physical C:\WINDOWS\SYSTEM32 directory and will continue to use the database libraries. Some of our Windows team are concerned about this hack and prefer a more "correct" solution that (i) doesn't require changing the Windows system directory (ii) doesn't add to the "memory" burden about what was configured on the system (iii) works when there are multiple database versions installed. The solution is to write a script which will set the 64-bit (or 32-bit) Oracle libraries in the path as needed before invoking the relevant bit-ness application. This does have a weakness when the application is started as a service. As a footnote: If you don't have a local database and simply need to have 32-bit and 64-bit Instant Client accessible at the same time, try the "symbolic" link approach covered in the hack in this OTN forum thread. Reminder warning: This blog post came untested from a Linux user.

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
    The project I'm currently working on is going to make heavy use of IBM WebShere MQ to send messages from BizTalk to the client's iSeries box.  I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work.  I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue.  Or so I thought... A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development.  Everything was going great, I was able to put messages onto the queue, I was happy, the client was happy.  Life was good.  Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries.  We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be.  As it turns out, it was a good thing we made the change.  As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log.  An example from the event log: Event Type: Error Event Source: BizTalk Server 2009 Event Category: BizTalk Server 2009 Event ID: 5754 Date:  6/9/2010 Time:  10:16:41 AM User:  N/A Computer: WINDOWS2003 Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended.  Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124  MessageId:  {76825C7C-611A-4A56-8A6F-35E1124BDB5C}  InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6} The key piece of information in the event entry is the reasonCode, 6124.  A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED.  According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager.  Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter. Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ.  With tracing enabled, I tried sending a message again, then went and reviewed the trace files.  The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this: 00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn 00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK) 00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent 0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK) 0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED) 0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED) 0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock 00006260 15:40:20.328769   3500.4           :       --}! 
ImqQueue::lock (rc=Unknown(1)) 00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock 00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1)) It seemed like the MQRC_NOT_CONNECTED error was being caused by a security related issue (MQRC_NOT_AUTHORIZED).  I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under.  I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries.  I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance and reconfigured my orchestration to run on the new host instance.  After bouncing all my host instances, I was now able to send messages to the iSeries. It's still not clear to me why we were able to connect to the Solaris server.  I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.

    Read the article

  • Learn to use PHP and Python with Oracle Database

    - by christopher.jones
    The Oracle Learning Library has posted up the latest "Oracle By Example" labs giving an introduction to PHP & Python with the Oracle Database : Using PHP with Oracle Database 11g - a basic introduction Developing a PHP Web Application with Oracle Database 11g - a Zend Framework application using the NetBeans IDE Using Python With Oracle Database 11g - a basic introduction Using the Django Framework with Python and Oracle Database 11g - a basic web application
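
    For a flavour of what the Python lab covers, a basic query with the cx_Oracle driver looks roughly like this (a sketch only; the credentials, connect string and table are placeholders, not taken from the labs):

        # Placeholder connection details; adjust for your own database.
        import cx_Oracle

        conn = cx_Oracle.connect("hr", "welcome", "localhost/orcl")
        cur = conn.cursor()
        cur.execute("SELECT first_name, last_name FROM employees WHERE rownum <= :n", n=5)
        for first_name, last_name in cur:       # iterate over the fetched rows
            print(first_name, last_name)
        cur.close()
        conn.close()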

    Read the article

  • Python and Ruby in Oracle Tuxedo

    - by christopher.jones
    Did you know you can now develop services and applications in Python or Ruby with Oracle Tuxedo? The Tuxedo team have a blog post about it at Python and Ruby in Tuxedo. I used to think of Tuxedo as a Transaction Processing Monitor but it has evolved into much more.

    Read the article

  • Using Oracle Data in the Business Rules Engine

    - by Christopher House
    Yesterday I started working on some new functionality that I had planned to implement using the Business Rules Engine.  As I got further into it, I realized that some of my rules were going to need to reference some data that resides in an Oracle database.  I knew the Business Rules Composer supports using DataConnections and TypedDataTables, but I'd never used this functionality myself, so I wasn't so sure how it would work with Oracle.  As it turns out, it's very do-able, there's just a little hoop you need to jump through. I fired up BRC and my suspicions were quickly confirmed.  BRC only recognizes SQL Server databases when it comes to editing rules.  Not letting that deter me, I decided to see if I could "trick" BRE into using Oracle data. On my local SQL server, I created a new database and in that database, created a table that matched the schema of the table I wanted to use in the Oracle database.  I then set about creating my rules, referencing the new SQL Server database everywhere I wanted to use Oracle data.  Finally, I created a new class library and added a class that implements Microsoft.RuleEngine.IFactRetriever.  In that class, I added the necessary code to get a DataSet from the Oracle server, wrap it in a TypedDataTable and assert it into the rule engine.  It's worth pointing out that in my IFactRetriever class, I made sure to set my DataSet name to the name of the database I'd referenced in the BRC and the DataTable's name to the name of the table that I'd referenced in the BRC. After gac'ing the new class library and deploying my policy, I tested and everything worked as expected.

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations.  Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging.  Some of the data flows, however, are daily batch type transmissions.  For the daily batch transmissions, it was decided that we'd use SSIS to pull the data direct from DB2 to either a SQL Server or flat file.  I'm not at all an SSIS guy, I've done a bit here and there, but mainly for situations were we needed to move data from a dev environment to QA, mostly informal stuff like that.  And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy.  Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS flatform (that's what the DB in DB2 means, right?).   One of my first goals when I came onto this project was to develop of POC SSIS package to pull some data from DB2 and dump it to a flat file.  It sounded like a pretty straight forward task.  As always, the devil is in the details.  Configuring the DB2 connection manager took a bit of trial and error.  As such, I thought I'd post my experiences here in hopes that they might save someone the efforts I went through.  That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on. Before you get started, you need to figure out how you're going to connect to DB2.  From the research I did, it looks like there are a few options.  IBM has both an OLE DB and .Net data provider which can be found here.  I installed their client access tools and tried to use both the .Net and OLE DB providers but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect.  I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out.  The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2.  This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here. As it turns out, I already had Microsoft's driver installed on my dev VM, which stuck me as odd since I hadn't installed it.  I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM.  However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.   Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter in your host name, login credentials and initial catalog. A couple of things to note here.  First, the Initial catalog needs to be the same as your host name.  Not sure why that is, but trust me, it just does.  Second, for credentials, in my environment, we're using what the client's iSeries people refer to as "profiles".  I guess this is similar to SQL auth in the SQL Server world.  In other words, they've given me a username and password for connecting to DB, so I've entered it here. Next, click the Data Links button.  
On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out. From the little bit I've read, packages are used to control SQL compilation and each DB2 connection needs one. The package collection, I believe, controls where your package is created. One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority. Next, click the ellipsis next to the Network drop-down. Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here. Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform. Process binary as character was necessary to handle some of the DB2 data types. I had a few columns that showed all their data as "System.Byte[]". Checking Process binary as character resolved this. At this point, you should be good to go. You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration. The Test Connection button is obvious; it just verifies you can connect to the host using the configuration data you've entered. The Packages button will attempt to connect to the host and create the packages required to execute queries. This isn't meant to be a comprehensive look at SSIS and DB2, these are just some of the notes I've come up with since I've started working with DB2 and SSIS. I'm sure as I continue developing my packages, I'll find more quirks and will post them here.

    Read the article

  • SSO Configuration MMC Snap-in

    - by Christopher House
    This may be old news to most people but I've been away from BizTalk for about a year, so this was a welcome development for me.  The other day, I was discussing with my client the various options for storing configuration data required by our project.  I brought up SSO as it's something I've used with success on previous projects.  The client hadn't previously used SSO and was concerned about the maintainability of configuration stored in SSO.  I offered to do a quick POC to demonstrate storing/retrieving/maintaining configuration via SSO.  As I set about creating the POC, I needed to download Richard Seroter's SSO configuration tool, since that's what I've used previously for managing SSO data.  I went to google to track it down and was pleasantly surprised to discover that Microsoft has finally released an MMC snap-in for maintaining SSO applications. The download contains three components.  The first is the MMC snap-in which allows you to create/delete applications as well as name/value pairs within an application.  Next is a C# class file, SSOConfigHelper.cs, which can be used to retrieve values from an SSO application.  Finally, there's an MSBuild task that allows you to deploy SSO application data with your builds. I didn't see any information as to which versions are supported, I'm using it in a BizTalk 2009 environment and it seems to work quite nicely.  The download package is available here.

    Read the article

  • Using Fiddler with BizTalk's HTTP Adapter

    - by Christopher House
    I'm working on an orchestration that's retrieving some data from a Java servlet.  The servlet takes a parameter string via HTTP post and returns POX (plain old XML, no SOAP here).  I was having trouble getting a valid response from the servlet when I was sending some test messages and wanted to see what my messages were looking like as they went across the wire.  Normally, if I were using WCF, I'd set up message logging, but since that's obviously not an option with the HTTP adapter, my thoughts turned to Fiddler.  A quick Google search turned up some promising results.  The posts I read all referred to using Fiddler with the SOAP adapter, but I thought I could apply the same ideas to the HTTP adapter.  This led me to try setting the following context properties: HttpRequestMessage(HTTP.UseProxy) = true; HttpRequestMessage(HTTP.ProxyName) = "127.0.0.1"; HttpRequestMessage(HTTP.ProxyPort) = 8888; I rebuilt my orch, gac'd it, bounced my host and tried submitting a test message.  Fiddler was running but I didn't see any traffic show up.  I tried fully undeploying/redeploying my application and still, no traffic in Fiddler.  I was starting to think that BizTalk was ignoring the proxy settings.  To confirm this, I closed Fiddler and submitted a test message.  Sure enough, the orch ran to completion, proving that BizTalk was ignoring the proxy settings. I went back to my orch to see if there could be any other context properties I needed to set.  I saw one that looked promising: HTTP.UseHandlerProxySettings.  I set this to false, rebuilt my orch and this time when I submitted, I got an error message, which made sense, I didn't have Fiddler running.  I started up Fiddler, submitted another message and there it was, my HTTP traffic, just as I hoped.  And, I was quickly able to figure out what the problem was...I had forgotten to set HTTP.ContentType to application/x-www-form-urlencoded.

    Read the article

  • Reducing Oracle LOB Memory Use in PHP, or Paul's Lesson Applied to Oracle

    - by christopher.jones
    Paul Reinheimer's PHP memory pro tip shows how re-assigning a value to a variable doesn't release the original value until the new data is ready. With large data lengths, this unnecessarily increases the peak memory usage of the application. In Oracle you might come across this situation when dealing with LOBS. Here's an example that selects an entire LOB into PHP's memory. I see this being done all the time, not that that is an excuse to code in this style. The alternative is to remove OCI_RETURN_LOBS to return a LOB locator which can be accessed chunkwise with LOB->read(). In this memory usage example, I threw some CLOB rows into a table. Each CLOB was about 1.5M. The fetching code looked like: $s = oci_parse ($c, 'SELECT CLOBDATA FROM CTAB'); oci_execute($s); echo "Start Current :" . memory_get_usage() . "\n"; echo "Start Peak : " .memory_get_peak_usage() . "\n"; while(($r = oci_fetch_array($s, OCI_RETURN_LOBS)) !== false) { echo "Current :" . memory_get_usage() . "\n"; echo "Peak : " . memory_get_peak_usage() . "\n"; // var_dump(substr($r['CLOBDATA'],0,10)); // do something with the LOB // unset($r); } echo "End Current :" . memory_get_usage() . "\n"; echo "End Peak : " . memory_get_peak_usage() . "\n"; Without "unset" in loop, $r retains the current data value while new data is fetched: Start Current : 345300 Start Peak : 353676 Current : 1908092 Peak : 2958720 Current : 1908092 Peak : 4520972 End Current : 345668 End Peak : 4520972 When I uncommented the "unset" line in the loop, PHP's peak memory usage is much lower: Start Current : 345376 Start Peak : 353676 Current : 1908168 Peak : 2958796 Current : 1908168 Peak : 2959108 End Current : 345744 End Peak : 2959108 Even if you are using LOB->read(), unsetting variables in this manner will reduce the PHP program's peak memory usage. With LOBS in Oracle DB there is also DB memory use to consider. Using LOB->free() is worthwhile for locators. Importantly, the OCI8 1.4.1 extension (from PECL or included in PHP 5.3.2) has a LOB fix to free up Oracle's locators earlier. For long running scripts using lots of LOBS, upgrading to OCI8 1.4.1 is recommended.

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed to be a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows: (10.2.2.1 would be my home WAN IP) .htaccess in root of domain1 RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC] RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301] .htaccess in staging subdomain on domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L] The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration as the visitor is potentially redirected twice, however I find it to be a more granular method of control as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read. (or re-interpret, because I'm always forgetting how I put rules together!) If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn, this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify specific ranges of IPs in a subnet or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again. 
NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one otherwise mod_rewrite will interpret as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2... which is of course impossible! However as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt internetofficer.com/seo-tool/regex-tester/ addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
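
    As an aside (not from the original post), the IP-matching patterns discussed above are plain regular expressions, so they can be sanity-checked outside Apache before going into a RewriteCond; a quick sketch in Python:

        # Quick check of two of the pattern styles discussed above: a wildcarded
        # subnet and a bracketed list of specific addresses.
        import re

        patterns = {
            r'^10\.2\.2\..*': 'any address in 10.2.2.x',
            r'^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$': 'three specific addresses',
        }

        for ip in ('10.2.2.1', '10.2.3.1', '2.2.2.2'):
            for pattern, meaning in patterns.items():
                verdict = 'matches' if re.match(pattern, ip) else 'does not match'
                print('%s %s %s (%s)' % (ip, verdict, pattern, meaning))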

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal access in arbitrary 'tick' time units. Click the image to see it full sized. Pooled connections Beginning with the top left graph, At tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes but this is almost too small to see on the graph. Non-pooled connections Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top right graph. 
There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it. Hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load. Summary Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated for the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >