Search Results

Search found 294 results on 12 pages for 'christopher maier'.


  • Error Installing ruby with RVM Single User mode on Arch Linux

    - by ChrisBurnor
    I've just installed RVM on Arch Linux x64 in single-user mode via the recommended install script:
    curl -L https://get.rvm.io | bash -s stable
    I've also installed all the requirements listed in rvm requirements. However, I'm having trouble actually installing any version of ruby, and am getting the following error:
    arch:~ % rvm install 1.9.3
    No binary rubies available for: ///ruby-1.9.3-p194. Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
    Fetching yaml-0.1.4.tar.gz to /home/christopher/.rvm/archives
    % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 460k 100 460k 0 0 702k 0 --:--:-- --:--:-- --:--:-- 767k
    Extracting yaml-0.1.4.tar.gz to /home/christopher/.rvm/src
    Prepare yaml in /home/christopher/.rvm/src/yaml-0.1.4.
    Configuring yaml in /home/christopher/.rvm/src/yaml-0.1.4.
    Error running ' ./configure --prefix=/home/christopher/.rvm/usr ', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
    Compiling yaml in /home/christopher/.rvm/src/yaml-0.1.4.
    Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/make.log
    Please note that it's required to reinstall all rubies: rvm reinstall all --force
    Installing Ruby from source to: /home/christopher/.rvm/rubies/ruby-1.9.3-p194, this may take a while depending on your cpu(s)...
    ruby-1.9.3-p194 - #downloading ruby-1.9.3-p194, this may take a while depending on your connection...
    ruby-1.9.3-p194 - #extracting ruby-1.9.3-p194 to /home/christopher/.rvm/src/ruby-1.9.3-p194
    ruby-1.9.3-p194 - #extracted to /home/christopher/.rvm/src/ruby-1.9.3-p194
    Skipping configure step, 'configure' does not exist, did autoreconf not run successfully?
    ruby-1.9.3-p194 - #compiling
    Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/make.log
    There has been an error while running make. Halting the installation.
    The log files are as follows:
    arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
    __rvm_log_command:32: permission denied:
    arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/make.log
    make: *** No targets specified and no makefile found. Stop.
    arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/make.log
    make: *** No targets specified and no makefile found. Stop.

    Read the article

  • How to install the latest version of Google Earth on Ubuntu 12.10 64-bit?

    - by user114769
    Chris here with a huge problem with the latest version of Google Earth. Gosh... whenever I try to launch the app this comes out. I've tried this web site but nothing worked (thank you so much for even reading this; may you guys have a nice day):
    christopher@christopher-E4300:~$ google-earth
    Google Earth has caught signal 11. We apologize for the inconvenience, but Google Earth has crashed. This is a bug in the program, and should never happen under normal circumstances. A bug report and debugging data have been written to this text file: /home/christopher/.googleearth/crashlogs/crashlog-50cbd67e.txt
    Please include this file if you submit a bug report to Google.
    https://help.ubuntu.com/community/GoogleEarth#Installing_the_.deb_file_downloaded_from_the_Google_Earth_Website
    Here is the content of /home/christopher/.googleearth/crashlogs/crashlog-50cbd67e.txt:
    Major Version 7
    Minor Version 0
    Build Number 0001
    Build Date Oct 29 2012
    Build Time 19:13:39
    OS Type 3
    OS Major Version 3
    OS Minor Version 5
    OS Build Version 0
    OS Patch Version 0
    Crash Signal 11
    Crash Time 1355535998
    Up Time 0.789556
    Stacktrace from glibc:
    ./libgoogleearth_free.so(+0x1e9cfb)[0xf757dcfb]
    ./libgoogleearth_free.so(+0x1e9f43)[0xf757df43]
    [0xf7726400]

    Read the article

  • Issues running commands

    - by Joel
    Every time I run a command I get this back:
    E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
    E: Unable to lock directory /var/lib/apt/lists/
    E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
    E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
    christopher@christopher:~$
    This didn't start happening until I changed my device name.

    Read the article

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know what the performance difference is between using multiple sprites in one file (a sprite sheet) to draw a game character that can face in 4 directions, and using one sprite per file but rotating that character to my needs. I am aware that the sprite sheet method restricts the character to only be able to look in predefined directions, whereas the rotation method would give the character the freedom of "looking everywhere". Here's an example of what I am doing:
    Single Sprite Method: Assuming I have a 64x64 texture that points north, I do the following if I want it to point east:
    spriteBatch.Draw( _sampleTexture, new Rectangle(200, 100, 64, 64), null, Color.White, (float)(Math.PI / 2), Vector2.Zero, SpriteEffects.None, 0);
    Multiple Sprite Method: Now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following:
    spriteBatch.Draw( _sampleSpritesheet, new Rectangle(400, 100, 64, 64), new Rectangle(64, 0, 64, 64), Color.White);
    So which of these methods uses less CPU time, and what are the pros and cons? Is .NET/XNA optimizing this in any way (e.g. it notices that the same call was made last frame and then just uses an already rendered/rotated image that's still in memory)?

    Read the article

  • Why is this PHP loop rendering every row twice?

    - by Christopher
    I'm working on a real frankensite here, not of my own design. There's a rudimentary CMS and one of the pages shows customer records from a MySQL DB. For some reason, it has no problem picking up the data from the DB - there are no duplicate records - but it renders each row twice. The page PHP is viewable at http://christopher.pastebin.com/DQkjjG3s (I attempted to include it in this post but it was horribly mangled; I think it's important to have it all in context). I'm not the world's best PHP expert, but I think I can spot an error in a for loop when there is one... and everything looks OK to me. You'll notice that the customer name is clickable; clicking takes you to another page where you can view their full info as held in the DB - and for both rows, the customer ID is identical, and manually checking the DB shows there are no duplicate entries. The code is definitely rendering each row twice, but for what reason I have no idea. All pointers / advice appreciated.

    Read the article

  • How to update the text of a tag in XML using ElementTree

    - by Christopher
    Using ElementTree, the easiest way to read the text of a tag is to do the following:
    import elementtree.ElementTree as ET
    sKeyMap = ET.parse("KeyMaps/KeyMap_Checklist.xml")
    host = sKeyMap.findtext("/BrowserInformation/BrowserSetup/host")
    Now I want to update the text in the same file, hopefully without having to re-write it, with something easy like:
    host = "4444"
    sKeyMap.replacetext("/BrowserInformation/BrowserSetup/host")
    Any ideas? Thanks in advance, Christopher
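    A minimal sketch of one way to do the update, assuming the standard-library xml.etree.ElementTree module (the calls should be the same with the older standalone elementtree package used in the question); the file name and element path are taken from the question:
        import xml.etree.ElementTree as ET

        # Parse the same file used in the question.
        tree = ET.parse("KeyMaps/KeyMap_Checklist.xml")

        # find() returns the Element itself (findtext() only returns its string),
        # so its .text attribute can be assigned directly.
        host_elem = tree.find("BrowserInformation/BrowserSetup/host")
        if host_elem is not None:
            host_elem.text = "4444"

        # ElementTree has no in-place update; the tree is serialized back to disk.
        tree.write("KeyMaps/KeyMap_Checklist.xml")
    The trade-off is that write() rewrites the whole file; ElementTree offers no way to modify just one node on disk.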

    Read the article

  • Green bar on top of every (web) video, distorted colors too

    - by Walter Maier-Murdnelch
    Hey, for a few days now I've had a problem with some videos I watch on the web (YouTube, Vimeo etc). There is a green bar at the top of the video and the colors are distorted; it looks like they are somehow shifted. I am not sure when this problem appeared; I guess it might have been an update to the Flash player. Anyway, I found a workaround: disabling hardware acceleration helps (right-click on the video - settings - disable hardware acceleration). After reloading the page the green bar is gone. The problem is that this setting doesn't seem to be persistent, so I have to disable it again on every other video. How can I make this setting permanent, or how can I get rid of the green bar altogether? (I will add a screenshot later)

    Read the article

  • Is it a good idea to switch to an SSD to use less battery?

    - by Walter Maier-Murdnelch
    I am thinking of buying an SSD for my laptop, mainly for the purpose of extended operating time when running on battery. At the moment I use a Hitachi HTS545032B9A300 (320GB) (Datasheet) as main drive and a Seagate Momentus 5400.3 120GB as secondary drive. I dual-boot Windows and Linux but I don't need the Windows partition any longer, so a 120GB SSD would be more than sufficient space-wise. Speed is not an issue for me; I make heavy use of tmpfs (ramdrive) within Linux, and transfers of bigger files go mainly through some network filesystem anyway, thus a cheaper SSD should do. For the purpose of comparison I chose the OCZ Vertex Plus 120GB. Power consumption is always a big promotional point the industry uses to make me want to buy their SSDs; some sheet on the OCZ page provides an astonishing comparison of desktop HDDs and SSDs. The numbers I got comparing my laptop HDD and their SSD were not that astonishing any longer.
    Hitachi 320GB HDD:
    Startup (W, peak, max.): 4.5
    Seek (W, avg.): 1.7
    Read / Write (W, avg.): 1.4
    Performance idle (W, avg.): 1.3
    Active idle (W, avg.): 0.8
    Low power idle (W, avg.): 0.5
    Standby (W, avg.): 0.2
    Sleep: 0.1
    OCZ 120GB SSD:
    Active: 1.5W
    Standby: 0.3W
    I see that there are differences, but they don't seem as high as I thought they were. And compared to the power consumption of the rest of my system, I wonder if it makes a difference at all. Have I just taken the wrong look at the whole thing, or might I be better off buying another battery for my laptop?
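    As a rough sanity check on whether the drive swap matters for battery life, here is a small back-of-the-envelope sketch; only the per-drive wattages come from the figures above, while the battery capacity, system draw and idle/active mix are assumptions for illustration:
        # Back-of-the-envelope battery-life comparison.
        # Drive wattages are from the figures above; battery capacity, system
        # draw and the idle/active mix are illustrative assumptions.
        BATTERY_WH = 48.0   # assumed battery capacity (watt-hours)
        SYSTEM_W = 15.0     # assumed average draw of the rest of the laptop

        def runtime_hours(drive_avg_w):
            return BATTERY_WH / (SYSTEM_W + drive_avg_w)

        # Assume the drive is busy 20% of the time and idle the rest.
        hdd_avg = 0.2 * 1.4 + 0.8 * 0.8   # Hitachi: read/write vs. active idle
        ssd_avg = 0.2 * 1.5 + 0.8 * 0.3   # OCZ: active vs. standby

        print("HDD: %.2f h, SSD: %.2f h" % (runtime_hours(hdd_avg),
                                            runtime_hours(ssd_avg)))
    With those assumptions the gap works out to only a few minutes of extra runtime, which is exactly the kind of "does it make a difference at all" check the question asks for.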

    Read the article

  • Top 10 Architect Community Articles for May 2014

    - by OTN ArchBeat
    One of the things I get to do as an OTN community manager is work with members of the architect community who want to spend extra time pounding the keyboard and risking carpal tunnel syndrome to publish articles on OTN. These articles typically cover—but are not limited to—middleware technologies (the other OTN community managers cover other technologies and product areas). Naturally, we track the popularity of these articles and use that information to help guide editorial decisions about the many article submissions we get. The list below represents the Top 10 most popular architect community articles for May 2014. (This list reflects only articles published between June 1, 2013 and May 31, 2014.)
    1. Cookbook: Installing and Configuring Oracle BI Applications 11.1.1.7.1 [August 2013] by Mark Rittman and Kevin McGinley
    2. Enterprise Service Bus [July 2013] by Jürgen Kress, Berthold Maier, Hajo Normann, Danilo Schmeidel, Guido Schmutz, Bernd Trops, Clemens Utschig-Utschig, Torsten Winterberg
    3. Back Up a Thousand Databases Using Enterprise Manager Cloud Control 12c [January 2014] by Porus Homi Havewala
    4. Set Up and Manage Oracle Data Guard using Oracle Enterprise Manager Cloud Control 12c [August 2013] by Porus Homi Havewala
    5. SOA and Cloud Computing [April 2014] by Jürgen Kress, Berthold Maier, Hajo Normann, Danilo Schmeidel, Guido Schmutz, Bernd Trops, Clemens Utschig-Utschig, Torsten Winterberg
    6. Building a Responsive WebCenter Portal Application [April 2014] by JayJay Zheng
    7. Using WebLogic 12c with Netbeans IDE by Markus Eisele
    8. Making the Move from Oracle Warehouse Builder to Oracle Data Integrator 12c by Stewart Bryson
    9. A Real-World Guide to Invoking OSB and EDN using C++ and Web Services [January 2014] by Sebastian Lik-Keung Ma
    10. Why Would Anyone Want to be an Architect? [May 2014] by Bob Rhubart
    If this list leaves you feeling inspired to write a technical article for OTN, or if you have questions about the process, drop me a line in the comments section below. I'll get back to you ASAP.

    Read the article

  • SOA & BPM Integration Days February 23rd & 24th 2011 Düsseldorf Germany

    - by Jürgen Kress
    The key German SOA experts will present at the SOA & BPM Integration Days 2011. For all German SOA users it's the key event in 2011, so make sure you register today! Speakers include: Torsten Winterberg (OPITZ), Hajo Normann (HP), Guido Schmutz (Trivadis), Dirk Krafzig (SOAPARK), Niko Köbler (OPITZ), Clemens Utschig-Utschig (Boehringer Ingelheim), Nicolai Josuttis (IT communication), Bernd Trops (SOPERA), Berthold Maier (Oracle Deutschland), Jürgen Kress (Oracle EMEA). The agenda is exciting for all SOA and BPM experts! See you in Düsseldorf, Germany! The time has come: the Entwickler Akademie and the magazine Business Technology present the first SOA, BPM & Integration Days! This unique training event brings together the leading minds on SOA, BPM and integration in the German-speaking world. For two days you will get valuable information, insights and experience from day-to-day project work, without any marketing filter. You only have to decide which personal focus topics you want to set. In addition, you have the unique chance to discuss your questions and challenges with hands-on experts. At the SOA, BPM & Integration Days you profit from the concentrated practical experience of the circle of authors of the SOA Spezial magazine (a publication of Software & Support Verlag), well-known book authors and long-standing speakers at the JAX conferences. You should not miss this event! Details and registration at: www.soa-bpm-days.de For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Technorati Tags: SOA & BPM Integration Days,SOA,BPM,Hajo Normann,Torsten Winterberg,Clemens Utschig-Utschig,Berthold Maier,Bernd Trops,Guido Schmutz,Nicolai Josuttis,Niko Köbler,Dirk Krafzig,Jürgen Kress,Oracle,Jax,Javamagazin,entwickler akademie,business technology

    Read the article

  • Industrialized SOA – topic of Business Technology Magazine

    - by JuergenKress
    Although it has become quieter around SOA, the concept is not buried at all. On the contrary, over the years it has reached a new maturity level. Hypes such as Cloud Computing and Big Data have pushed SOA out of the headlines; however, "the new hypes have not replaced service orientation, but built on it." The authors of this edition rank among the SOA pioneers in Germany. They have gathered their collective knowledge for this issue and created a unique picture of the current state of SOA. According to them, SOA has developed evolutionarily towards industrialization, towards a holistic platform - and thus towards a new Industrialized SOA. Issue 3.12 of the BT magazine (in German!) is available as an iPad app (http://it-republik.de/business-technology/bt-magazin-ipad-app), via mail (http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html) or at the kiosk! The magazine is published by: Berthold Maier, Jürgen Kress, Hajo Normann, Danilo Schmiedel, Guido Schmutz, Bernd Trops, Clemens Utschig-Utschig, Torsten Winterberg. For more information see www.bt-magazin.de SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Industrial SOA,Industrialized SOA,Berthold Maier,Hajo Normann,Danilo Schmiedel,Guido Schmutz,Bernd Trops,Clemens Utschig-Utschig,Torsten Winterberg,SOA Spezial II,Business Technology Magazin,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel

    Read the article

  • WiX: Launch application after installation completes, with UAC turned on.

    - by Christopher Roy
    Good day. I've been building an installer for our product using the WiX (Windows Installer XML) technology. The expected behavior is that the product is launched after installation if the check box is checked. This has been working for some time now, but we found out recently that UAC on Windows 7 and Vista is stopping the application from launching. I've done some research and it has been suggested to me that I should add the attributes Execute='deferred' and Impersonate='no', which I did, but then found out that to execute deferred, the CustomAction has to be performed between the InstallInitialize and InstallFinalize phases, which is not what I need. I need the product to launch AFTER InstallFinalize, IF the launch checkbox is checked. Is there any other way to elevate permissions? Any and all answers, suggestions, or reasoning will be appreciated. Cheers, Christopher Roy.

    Read the article

  • Win32 scrolling examples

    - by Christopher
    Could anyone point me to (or provide?) some nice, clear examples of how to implement scrolling in Win32? Google brings up a lot of stuff, obviously, but most examples seem either too simple or too complicated for me to be sure that they demonstrate the right way of doing things. I use LispWorks CAPI (cross-platform Common Lisp GUI lib) in my current project, and on Windows I have a hard-to-figure-out bug relating to scrolling; basically I want to do some tests directly via the Win32 API to see if I can shed some light on the situation. Many thanks, Christopher

    Read the article

  • Can't get automated release working with Hudson + Git + Maven Release Plugin

    - by Christopher Maier
    As the title says, I'm trying to get an automated release job working on Hudson. It's a Maven project, and all the code is in Git. Manually, I do the release on my personal machine like so:
    git checkout master
    mvn -B release:prepare release:perform
    This works perfectly. The Maven release plugin properly pushes the release tag to the origin repository as well as the next commit that bumps the version to the next SNAPSHOT. However, when I run this same Maven job through Hudson (either by creating my own "release" job or by using the M2 Release Plugin) it doesn't work so well. The release tag gets pushed out to the origin repository, and the release gets pushed out to our Nexus repository, but the subsequent commit that bumps the version to the next SNAPSHOT doesn't go out. Furthermore, the "master" branch in the origin repository doesn't get changed at all. I've looked in Hudson's workspace for the job, however, and the version has been updated. After looking at the output from the Hudson job, it appears that the Git plugin does not actually check out "master", but rather its SHA1 id. That is, if the "master" branch label points to commit "f6af76f541f1a1719e9835cdb46a183095af6861", Hudson does
    git checkout -f f6af76f541f1a1719e9835cdb46a183095af6861
    instead of
    git checkout -f master
    As a result, the changes that the Maven release plugin is making are not actually on any branch (certainly not on "master") and these changes don't make it to the origin repository. It runs on the right code, but bookkeeping-wise, the changes seem to get lost because no branch label points to them. Has anybody gotten the Hudson + Git + Maven Release Plugin combo to work properly? Is there some additional configuration somewhere I can set to make this happen? Or is this a bug in the Hudson Git plugin? Thanks in advance.

    Read the article

  • Constructor/Destructor involving a class and a struct

    - by Bogdan Maier
    I am working on a program and need to make an array of objects; specifically, I have a 31x1 array where each position is an object (each object is basically built out of 6 ints). Here is what I have, but something is wrong and I could use some help, thank you.
    31x1 struct header:
    const int days=31;
    struct Arr{
    int days;
    int *M;
    };
    typedef Arr* Array;
    31x1 matrix constructor:
    void constr(){
    int *M;
    M = new Expe[31]; // Expe is the class
    class header:
    class Expe {
    private:
    //0-HouseKeeping, 1-Food, 2-Transport, 3-Clothing, 4-TelNet, 5-others
    int *obj;
    }
    Class object constructor:
    Expe::Expe() {
    this->obj=new int[6];
    }
    Help please... because I'm pretty lost.

    Read the article

  • Access <body> tag properties from form found in mediaboxAdvanced lightbox.

    - by Christopher Richa
    I am building my portfolio website simply using HTML, Flash, and the Mootools Javascript framework. I have decided to implement a theme changing feature using Javascript and cookies. When the user clicks on the "Change Theme" link, a mediaboxAdvanced lightbox appears, containing a real-time form which allows you to change the theme on the portfolio. Here's the code for the form: <div id="mb_inline" style="display: none; margin:15px;"> <h3>Change Your Theme</h3> <form> <fieldset> <select id="background_color" name="background_color"> <option value="#dcf589">Original</option> <option value="#000FFF">Blue</option> <option value="#00FF00">Green</option> </select> </fieldset> </form> </div> I know, there is no submit button, but as soon as something is changed in that form, the following Mootools code happens: var themeChanger = new Class( { pageBody : null, themeOptions : null, initialize: function() { this.pageBody = $(document.body); this.themeOptions = $('background_color'); this.themeOptions.addEvent('change', this.change_theme.bindWithEvent(this)); }, change_theme: function(event) { alert("Hello"); } }); window.addEvent('domready', function() { var themeChanger_class = new themeChanger(); }); Now this is only a test function, which should be triggered when the dropdown menu changes. However, it seems that none of it works when the form is in the lightbox! Now if I decide to run the form outside of the lightbox, then it works great! Am I missing something? If you need more examples, I will fill in on demand. Thank you all in advance. -Christopher

    Read the article

  • Using PHP OCI8 with 32-bit PHP on Windows 64-bit

    - by christopher.jones
    The world migration from 32-bit to 64-bit operating systems is gaining pace. However, I've seen a couple of customers having difficulty with the PHP OCI8 extension and Oracle DB on Windows 64-bit platforms. The errors vary depending on how PHP is run. They may appear in the Apache or PHP log:
    Unable to load dynamic library 'C:\Program Files (x86)\PHP\ext\php_oci8_11g.dll' - %1 is not a valid Win32 application.
    or
    Warning oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that PATH includes the directory with Oracle Instant Client libraries
    Other than IIS permission issues, a common cause seems to be trying to use PHP with libraries from an Oracle 64-bit database on the same machine. There is currently no 64-bit version of PHP on http://php.net/ so there is a library mismatch. A solution is to install Oracle Instant Client 32-bit and make sure that PHP uses these libraries, while not interfering with the 64-bit database on the same machine. Warning: the following hacky steps come untested from a Linux user:
    1. Unzip Oracle Instant Client 32-bit and move it to C:\WINDOWS\SYSWOW64\INSTANTCLIENT_11_2. You may need to do this in a console with elevated permissions.
    2. Edit your PATH environment variable and insert C:\WINDOWS\SYSTEM32\INSTANTCLIENT_11_2 in the directory list before the entry for the Oracle Home library.
    Windows makes it so all 32-bit applications that reference C:\WINDOWS\SYSTEM32 actually see the contents of the C:\WINDOWS\SYSWOW64 directory. Your 64-bit database won't find an Instant Client in the real, physical C:\WINDOWS\SYSTEM32 directory and will continue to use the database libraries. Some of our Windows team are concerned about this hack and prefer a more "correct" solution that (i) doesn't require changing the Windows system directory, (ii) doesn't add to the "memory" burden about what was configured on the system, and (iii) works when there are multiple database versions installed. That solution is to write a script which will set the 64-bit (or 32-bit) Oracle libraries in the path as needed before invoking the relevant bit-ness application. This does have a weakness when the application is started as a service. As a footnote: if you don't have a local database and simply need to have 32-bit and 64-bit Instant Client accessible at the same time, try the "symbolic" link approach covered in the hack in this OTN forum thread. Reminder warning: this blog post came untested from a Linux user.
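    To illustrate the script-based alternative described above (put the right-bitness Oracle libraries on the path before invoking the application), here is a minimal launcher sketch; the Instant Client directories and the launched command are hypothetical examples, not paths from the post:
        # Hypothetical launcher: prepend the matching Oracle Instant Client
        # directory to PATH, then start the application. Directory names and
        # the target executable are illustrative assumptions only.
        import os
        import subprocess
        import sys

        CLIENT_DIRS = {
            "32": r"C:\oracle\instantclient_11_2_x86",
            "64": r"C:\oracle\instantclient_11_2_x64",
        }

        def launch(bitness, command):
            env = os.environ.copy()
            env["PATH"] = CLIENT_DIRS[bitness] + os.pathsep + env.get("PATH", "")
            return subprocess.call(command, env=env)

        if __name__ == "__main__":
            # e.g.  python launch.py 32 C:\PHP\php.exe script.php
            sys.exit(launch(sys.argv[1], sys.argv[2:]))
    Because the PATH change lives only in the child process's environment, the 64-bit database and any other applications on the machine are left untouched.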

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
    The project I'm currently working on is going to make heavy use of IBM WebShere MQ to send messages from BizTalk to the client's iSeries box.  I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work.  I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue.  Or so I thought... A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development.  Everything was going great, I was able to put messages onto the queue, I was happy, the client was happy.  Life was good.  Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries.  We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be.  As it turns out, it was a good thing we made the change.  As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log.  An example from the event log: Event Type: Error Event Source: BizTalk Server 2009 Event Category: BizTalk Server 2009 Event ID: 5754 Date:  6/9/2010 Time:  10:16:41 AM User:  N/A Computer: WINDOWS2003 Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended.  Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124  MessageId:  {76825C7C-611A-4A56-8A6F-35E1124BDB5C}  InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6} The key piece of information in the event entry is the reasonCode, 6124.  A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED.  According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager.  Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter. Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ.  With tracing enabled, I tried sending a message again, then went and reviewed the trace files.  The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this: 00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn 00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK) 00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent 0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK) 0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED) 0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED) 0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock 00006260 15:40:20.328769   3500.4           :       --}! 
ImqQueue::lock (rc=Unknown(1)) 00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock 00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1)) It seemed like the MQRC_NOT_CONNECTED error was being caused by a security related issue (MQRC_NOT_AUTHORIZED).  I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under.  I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries.  I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance and reconfigured my orchestration to run on the new host instance.  After bouncing all my host instances, I was now able to send messages to the iSeries. It's still not clear to me why we were able to connect to the Solaris server.  I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.

    Read the article

  • Learn to use PHP and Python with Oracle Database

    - by christopher.jones
    The Oracle Learning Library has posted up the latest "Oracle By Example" labs giving an introduction to PHP & Python with the Oracle Database:
    Using PHP with Oracle Database 11g - a basic introduction
    Developing a PHP Web Application with Oracle Database 11g - a Zend Framework application using the NetBeans IDE
    Using Python With Oracle Database 11g - a basic introduction
    Using the Django Framework with Python and Oracle Database 11g - a basic web application
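    For a flavour of what the basic Python lab covers, here is a minimal connection sketch using the cx_Oracle driver; the credentials, connect string and table are illustrative assumptions, not taken from the labs:
        # Minimal cx_Oracle sketch: connect, run a query with a bind variable,
        # print the rows. Credentials, DSN and table are assumptions.
        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "localhost/orcl")
        cur = conn.cursor()
        cur.execute("SELECT first_name, last_name FROM employees WHERE rownum <= :n", n=5)
        for first, last in cur:
            print(first, last)
        cur.close()
        conn.close()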

    Read the article

  • Python and Ruby in Oracle Tuxedo

    - by christopher.jones
    Did you know you can now develop services and applications in Python or Ruby with Oracle Tuxedo? The Tuxedo team have a blog post about it at Python and Ruby in Tuxedo. I used to think of Tuxedo as a Transaction Processing Monitor but it has evolved into much more.

    Read the article

  • Using Oracle Data in the Business Rules Engine

    - by Christopher House
    Yesterday I started working on some new functionality that I had planned to implement using the Business Rules Engine.  As I got further into it, I realized that some of my rules were going to need to reference some data that resides in an Oracle database.  I knew the Business Rules Composer supports using DataConnections and TypedDataTables, but I’d never used this functionality myself, so I wasn’t so sure how it would work with Oracle.  As it turns out, it’s very do-able, there’s just little hoop you need to jump through. I fired up BRC and my suspicions were quickly confirmed.  BRC only recognizes SQL Server databases when it comes to editing rules.  Not letting that deter me, I decided to see if I could “trick” BRE into using Oracle data. On my local SQL server, I created a new database and in that database, created a table that matched the schema of the table I wanted to use in the Oracle database.  I then set about creating my rules, referencing the new SQL Server database everywhere I wanted to use Oracle data.  Finally, I created a new class library and added a class that implements Microsoft.RuleEngine.IFactRetriever.  In that class, I added the necessary code to get a DataSet from the Oracle server, wrap it in a TypedDataTable and assert it into the rule engine.  It’s worth pointing out that in my IFactRetriever class, I made sure to set my DataSet name to the name of the database I’d referenced in the BRC and the DataTable’s name to the name of the table that I’d referenced in the BRC. After gac’ing the new class library and deploying my policy, I tested and everything worked as expected.

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations.  Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging.  Some of the data flows, however, are daily batch type transmissions.  For the daily batch transmissions, it was decided that we'd use SSIS to pull the data direct from DB2 to either a SQL Server or flat file.  I'm not at all an SSIS guy, I've done a bit here and there, but mainly for situations were we needed to move data from a dev environment to QA, mostly informal stuff like that.  And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy.  Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS flatform (that's what the DB in DB2 means, right?).   One of my first goals when I came onto this project was to develop of POC SSIS package to pull some data from DB2 and dump it to a flat file.  It sounded like a pretty straight forward task.  As always, the devil is in the details.  Configuring the DB2 connection manager took a bit of trial and error.  As such, I thought I'd post my experiences here in hopes that they might save someone the efforts I went through.  That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on. Before you get started, you need to figure out how you're going to connect to DB2.  From the research I did, it looks like there are a few options.  IBM has both an OLE DB and .Net data provider which can be found here.  I installed their client access tools and tried to use both the .Net and OLE DB providers but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect.  I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out.  The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2.  This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here. As it turns out, I already had Microsoft's driver installed on my dev VM, which stuck me as odd since I hadn't installed it.  I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM.  However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.   Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter in your host name, login credentials and initial catalog. A couple of things to note here.  First, the Initial catalog needs to be the same as your host name.  Not sure why that is, but trust me, it just does.  Second, for credentials, in my environment, we're using what the client's iSeries people refer to as "profiles".  I guess this is similar to SQL auth in the SQL Server world.  In other words, they've given me a username and password for connecting to DB, so I've entered it here. Next, click the Data Links button.  
On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out.  From the little bit I've read, packages are used to control SQL compilation and each DB2 connection needs one.  The package collection, I believe, controls where your package is created.  One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority. Next click the ellipsis next to the Network drop-down.  Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here. Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform.  Process binary as character was necessary to handle some of the DB2 data types.  I had a few columns that showed all their data as "System.Byte[]".  Checking Process binary as character resolved this. At this point, you should be good to go.  You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration.  The Test Connection button is obvious, this just verifies you can connect to the host using the configuration data you've entered.  The Packages button will attempt to connect to the host and create the packages required to execute queries. This isn't meant to be a comprehensive look SSIS and DB2, these are just some of the notes I've come up with since I've started working with DB2 and SSIS.  I'm sure as I continue developing my packages, I'll find more quirks and will post them here.

    Read the article
