Search Results

Search found 42629 results on 1706 pages for 'dry run'.


  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their breakdown per request, but the statistics mentioned in that posting have some limitations: they are stored in memory instead of being persisted. To avoid memory overflow, the data are kept in a buffer of limited size; when the number of entries exceeds the limit, old data are flushed out to make way for new statistics, so the buffer only keeps the last X entries and statistics from five hours ago may no longer be there. Also, the BPEL engine performance statistics include only latencies; they do not provide throughput. Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data is naturally persisted there. It is at a coarser grain than the in-memory BPEL statistics, but it has its own strength: it is persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11g to acquire performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system.

    1. Incoming rates of asynchronous/one-way messages

    The following query shows the number of messages sent to one-way/async BPEL processes during a given time period, organized by process name and state:

    select composite_name composite, state, count(*) Count
    from dlv_message
    where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
      and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, state
    order by Count;

    2. Throughput of BPEL process instances

    The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances in all states, including unfinished and faulted ones, and the results include all composites across all SOA partitions:

    select state, count(*) Count, composite_name composite, component_name, componenttype
    from cube_instance
    where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
      and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype
    order by count(*) desc;

    3. Throughput and latencies of BPEL process instances

    This query builds on the previous one and provides more comprehensive information: not only throughput but also the maximum, minimum and average elapsed times of BPEL process instances:

    select composite_name Composite, component_name Process, componenttype, state, count(*) Count,
           trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime,
           trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime,
           trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime
    from cube_instance
    where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
      and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype, state
    order by count(*) desc;

    4. Combining it all

    Now let's combine these three queries and parameterize the start and end timestamps to make the script a bit more robust. The following script prompts for the start and end time before querying the database:

    accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
    accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'

    Prompt "==== Rejected Messages ====";
    REM 2012-10-24 21:00:00
    REM 2012-10-24 21:59:59
    select count(*), composite_dn
    from rejected_message
    where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS')
      and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_dn;

    Prompt " ";
    Prompt "==== Throughput of one-way/asynchronous messages ====";
    select state, count(*) Count, composite_name composite
    from dlv_message
    where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
      and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, state
    order by Count;

    Prompt " ";
    Prompt "==== Throughput and latency of BPEL process instances ===="
    select state, count(*) Count,
           trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime,
           trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime,
           trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime,
           composite_name Composite, component_name Process, componenttype
    from cube_instance
    where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
      and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype, state
    order by count(*) desc;
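    As a side note (my own sketch, not part of the original posting): if sub-second precision is not needed, the verbose interval arithmetic above can be collapsed by casting the timestamps to DATE, whose difference is a number of days that converts to seconds with one multiplication:

    select composite_name Composite, state, count(*) Count,
           round(max(cast(modify_date as date) - cast(creation_date as date)) * 86400) MaxSeconds,
           round(avg(cast(modify_date as date) - cast(creation_date as date)) * 86400) AvgSeconds
    from cube_instance
    where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
      and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, state;

    The DATE cast drops fractional seconds, so use it only where whole-second resolution is acceptable.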

    Read the article

  • 32 Stunning Movie Tributes in LEGO

    - by Jason Fitzpatrick
    These stunning sci-fi LEGO tributes are an impressive combination of time, money, and a whole lot of LEGO bricks. Read on to see everything from Death Star hangars to adorable robots. Over at Dvice, a SyFy channel blog, they’ve rounded up 32 impressive movie tributes crafted entirely in LEGO bricks. The model seen above, for example, is composed of 30,000 bricks and is over six feet on a side. Planning on building your own? You’d better have $2,300 to blow on bricks and six months of spare time to invest. Hit up the link below for more LEGO tributes. 32 Fan-Built LEGO Tributes to Science Fiction [Dvice]

    Read the article

  • gksudo waits for a few seconds after execution

    - by phoenix
    I'm frequently using application launchers to run personal bash scripts, and thus I often use gksudo when I do administrative tasks. The problem is that when I execute a command with gksudo, the execution is successful, but afterwards gksudo waits for about 5 seconds before it closes/finishes. In some scripts I use gksudo multiple times, resulting in execution times of a few minutes, even though everything should be done in a few seconds. Can anyone help me here? PS: here are my main /etc/sudoers settings (they might have something to do with my problem):

    Defaults env_reset,!tty_tickets,timestamp_timeout=2
    phoenix ALL= NOPASSWD: /bin/mount,/bin/umount,/usr/sbin/firestarter,/usr/bin/truecrypt,/usr/bin/apt-get
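    One workaround worth trying (my sketch, not from the original question; device paths and command names are illustrative): since every gksudo invocation seems to pay the same teardown delay, paying it once per script instead of once per command adds up quickly. Commands that this sudoers already allows with NOPASSWD can also use plain sudo, which involves no graphical dialog at all:

    #!/bin/sh
    # /bin/mount and /bin/umount are NOPASSWD in this sudoers, so plain
    # sudo runs them without any prompt or gksudo dialog:
    sudo mount /dev/sdb1 /media/data
    sudo umount /media/data

    # Where a graphical prompt is genuinely needed, batch the commands
    # into one gksudo invocation so its overhead is paid only once:
    gksudo "sh -c 'command1; command2; command3'"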

    Read the article

  • How to install Joomla in Ubuntu as localhost?

    - by Leon-TastyDev
    I have installed the lamp-server package, but I'm stuck on one particular step. Where it says remove installation folder, if I click the button "Remove installation folder" it says "error", and I guess this is because it doesn't have root privileges. So I run sudo nautilus, go to the folder /var/www, and remove the installation folder. Now when I try to access my localhost: No configuration file found and no installation code available. Exiting... I am confused: it asked me to remove the folder, and now that it is removed, it needs it?
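    As an aside, here is a command-line sketch of the same cleanup (the paths assume Joomla was unpacked straight into /var/www; adjust to your layout). The usual underlying cause of this state is that Apache's www-data user could not write configuration.php, so the installer never saved it; fixing ownership and re-running the web installer before removing the folder avoids the error:

    # Let the web server user write its own configuration.php,
    # then re-run the web installer from the browser:
    sudo chown -R www-data:www-data /var/www

    # Only after the installer finishes, remove the installation folder:
    sudo rm -rf /var/www/installation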

    Read the article

  • Plan your SharePoint 2010 Content Type Hub carefully

    - by Wayne
    Currently setting up a new environment on SharePoint 2010 (which was made available for download yesterday, if anyone missed that :-). One of the new features of SharePoint 2010 is the Content Type Hub (part of the Managed Metadata Service Application), which is a hub for all Content Types that other Site Collections can subscribe to. That is, you only need to manage your content types in one location. Setting up the Content Type Hub is not that difficult, but you must do it carefully to avoid a lot of rework and troubleshooting. Here is a short tutorial with a few tips and tricks to make it easy for you to get started.

    Determine the location of the Content Type Hub

    First of all you need to decide in which Site Collection to place your Content Type Hub: in the root site collection or in a specific one. I think using a specific Site Collection that acts only as a Content Type Hub is the best way; there is no best practice as of now. So I create a new Site Collection, at for instance http://server/sites/CTH/. The top-level site of this site collection should be, for instance, a Team Site. You cannot use Blank Site by default, which would have been the best option IMHO, since that site does not have the Taxonomy feature stapled onto it (check the TaxonomyFeatureStapler feature for which site templates can be used).

    Configure the Managed Metadata Service Application

    Next you need to create your Managed Metadata Service Application or configure the existing one: Central Administration > Application Management > Manage Service Applications. Select the Managed Metadata service application and click Properties if you have already created it. At the bottom of the dialog window, when you are creating the service application or editing its properties, is a section to fill in the Content Type Hub. In this text box fill in the URL of the Content Type Hub. It is essential that you have decided where your Content Type Hub will reside, since once this is set you cannot change it; the only way to change it is to rebuild the whole Managed Metadata service application! Also make sure that you enter the URL correctly. I copied and pasted the URL once and got /default.aspx in it, which messed the whole service up. Make sure that you use only the URL of the hub's Site Collection. Now you have to set things up so that other Site Collections can consume the content types from the hub. This is done by selecting the connection for the Managed Metadata service application and clicking Properties. A new dialog window opens, and there you need to check "Consumes content types from the Content Type Gallery at nnnn". Now you are free to syndicate your Content Types from the Hub.

    Publish Content Types

    To publish a Content Type from the hub, go to Site Settings > Content Types and select the content type that you would like to publish, then select Manage publishing for this content type. This takes you to a page from which you can Publish, Unpublish or Republish the content type. Once the content type is published, it can take up to an hour for the subscribing Site Collections to get it. This is controlled by the Content Type Subscriber job, which is scheduled to run once an hour. To speed up your publishing, go to Central Administration > Monitoring > Review Job Definitions > Content Type Subscriber and click Run now, and your content type will very soon be available for use.

    Published Content Type status

    You can check the status of content type publishing in your destination site collections by selecting Site Settings > Content Type Publishing. From here you can force a refresh of all subscribed content types, see which ones are subscribed, and check the publishing error log. This error log is very useful for detecting errors during publishing. For instance, if you use features such as ratings, metadata, or document IDs in your content type hub and your destination site collection does not have those features available, this will be reported here.
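    For those who prefer scripting it, here is a PowerShell sketch of the same setup (run from the SharePoint 2010 Management Shell; the service application name and URL are placeholders, and the timer job name filter is my assumption; verify it with Get-SPTimerJob on your farm):

    Set-SPMetadataServiceApplication -Identity "Managed Metadata Service" `
        -HubUri "http://server/sites/CTH"    # site collection URL only, no /default.aspx

    # Run the Content Type Subscriber job now instead of waiting up to an hour.
    Get-SPTimerJob | Where-Object { $_.Name -like "*ContentTypeSubscriber*" } |
        ForEach-Object { Start-SPTimerJob $_ }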

    Read the article

  • Different behaviour with windows authentication on IIS7 websites

    - by amaters
    I need to run a website with just Windows authentication. Given the following situation: the location of the default website is c:\inetpub\wwwroot; the location of my code is c:\Sites\WebApp; my hosts file is edited so any .local domain I use points to 127.0.0.1. I have created a new application called 'AppX' underneath the default website and pointed it to c:\Sites\WebApp. It uses the DefaultAppPool. When I switch off anonymous authentication and switch on Windows authentication, all works well when I go to localhost/AppX/. What I really want is a new website (no need to question why I want this), so I created Website2 and created the application in exactly the same way. Everything is the same: destination, app pool and authentication. Now when I browse to this website, web2.local/AppX/, I get the 401.2 - Unauthorized error. What am I missing here?
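    For comparison purposes, this is roughly (a sketch, not a verified fix) what the authentication section should look like for the new site; comparing Website2's effective configuration against the Default Web Site's in applicationHost.config can show where they differ:

    <!-- In a web.config this lives under <system.webServer>; overriding
         it there requires feature delegation to be unlocked. -->
    <system.webServer>
      <security>
        <authentication>
          <anonymousAuthentication enabled="false" />
          <windowsAuthentication enabled="true" />
        </authentication>
      </security>
    </system.webServer>

    If the settings match and the error persists, the difference is often outside IIS, for instance the Windows loopback check rejecting the custom web2.local host name on a local machine.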

    Read the article

  • Cannot mount Android phone in Ubuntu and sync with Banshee

    - by Brett Alton
    I can't get my LG Optimus One to sync with Banshee. I read somewhere that the root needs to have an empty file called '.is_audio_player'. I did that and it still doesn't mount. I ran dmesg, however, and it appears that the card is unmounting before I even have a chance to run Banshee.

    [ 7250.321359] usb 1-1.4: new high speed USB device using ehci_hcd and address 10
    [ 7250.444795] scsi12 : usb-storage 1-1.4:1.0
    [ 7251.567946] scsi 12:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
    [ 7251.568839] sd 12:0:0:0: Attached scsi generic sg3 type 0
    [ 7252.232433] sd 12:0:0:0: [sdc] 15564800 512-byte logical blocks: (7.96 GB/7.42 GiB)
    [ 7252.233299] sd 12:0:0:0: [sdc] Write Protect is off
    [ 7252.233306] sd 12:0:0:0: [sdc] Mode Sense: 03 00 00 00
    [ 7252.233309] sd 12:0:0:0: [sdc] Assuming drive cache: write through
    [ 7252.235658] sd 12:0:0:0: [sdc] Assuming drive cache: write through
    [ 7252.235666] sdc: sdc1
    [ 7252.239132] sd 12:0:0:0: [sdc] Assuming drive cache: write through
    [ 7252.239140] sd 12:0:0:0: [sdc] Attached SCSI removable disk
    [ 7272.573437] usb 1-1.4: USB disconnect, address 10

    Suggestions?

    Read the article

  • ADF Faces now in Eclipse

    - by shay.shmeltzer
    The new version of Oracle Enterprise Pack for Eclipse was just released, and one of the key new features it offers is integration of Oracle ADF Faces development into Eclipse. If you are serious about developing with JSF, you probably know by now that ADF Faces is the richest set of components out there, both in terms of the number of components and the functionality they offer. The components offer a lot of Ajax functionality out of the box, and the framework also offers windowing, drag and drop, push, a JavaScript API, skinning and much more. OEPE makes it simple to build with ADF Faces and test run your application. Here is a basic tutorial that will get you all set up to use this combination.

    Read the article

  • Pancake.io Is a Dead Simple Way to Host a Web Site from Your Dropbox Account

    - by Jason Fitzpatrick
    Pancake.io is a web-based app that makes it dead simple to use your Dropbox account as a simple web host. Sign up for an account and Pancake.io creates a folder in your Dropbox. You can modify the page in one of two ways: you can simply put files into the folder and use the simple template provided by Pancake.io to share them, or you can edit the template (located in the Pancake.io folder) to customize the page. Hit up the link below to read more about Pancake.io and take it for a test drive. Pancake.io [via ReadWriteWeb]

    Read the article

  • Developing for iOS on Linux

    - by Jay
    I am looking for an engine or library to develop a game for iOS on Linux. High level, low level, GUI, no GUI, it does not matter too much; I am really looking for anything. I'm not actually talking about deploying to iOS from Linux or anything like that. I just want to do the bulk of the work on Linux, with minimal changes required to run it on iOS. Edit: YES, I do have access to a Mac, but it is limited. So I want to be able to work on the project on my regular Ubuntu box. Also, I am in the paid developer program, so I can deploy to iOS devices from the Mac.

    Read the article

  • Windows 7 restarts PC when selected from GRUB Menu

    - by Dan Still
    I installed Windows 7 on a RAID 5 array (2@160GB SATA + 1@160GB SATA for RAID 5). I then proceeded to install Ubuntu 11.10 using the Live CD and opted for "Install alongside Windows 7". Upon boot, GRUB appears normally and I can select and run Ubuntu with no difficulties. When I select Windows 7 from GRUB, the PC restarts and consequently goes back to GRUB. I have attempted to use the Windows 7 DVD to repair the installation, but to no avail: the wizard ran twice, as it described it might, and after the second attempt came back with an '...inability to repair...' error. I am sure there is an answer to this somewhere, but I have yet to be able to find it (2 weeks and countless attempts and searches before posting this question). Although I am happy to use Ubuntu alone, my wife likes to watch Netflix, which requires the Windows 7 installation. Any answers are appreciated and welcomed. Thanks in advance. Dan Still

    Read the article

  • A new SQL, a new Analysis Services, a new Workshop! #ssas #sql2012

    - by Marco Russo (SQLBI)
    One week ago Microsoft SQL Server 2012 finally debuted with a virtual launch event, and you can find many intro sessions there (20 minutes each). There is a lot of new content available if you want to learn more about SQL 2012, and in this blog post I'd like to provide a few links to sessions, documents, bits and courses that are available now or very soon. First of all, the release of Analysis Services 2012 finally brings PowerPivot 2012 (many of us called it PowerPivot v2 before this official name) and also the new Data Mining Add-in for Microsoft Office 2010, now available also for Excel 64-bit! And, of course, don't miss the Microsoft SQL Server 2012 Feature Pack; there are a lot of upgrades for both DBAs and developers. I just discovered there is a new LocalDB version of SQL Express that can run in user mode without any setup. Is this the end of SQL CE? But now, back to Analysis Services: if you want a tutorial on Tabular, the Microsoft Virtual Academy has a whole track dedicated to Analysis Services 2012, but you will probably be interested also in the one about Reporting Services 2012. If you think that virtual is good but not enough, there are plenty of conferences in the coming months; these are just the ones where Alberto and I will deliver some SSAS Tabular presentations:

    SQLBits X, London, March 29-31, 2012: if you are in London or want a good reason to go, this is the most important SQL Server event in Europe this year, no doubt about it. And not only because of the high number of attendees, but also because there is an impressive number of speakers (excluding me, of course) coming from all over the world. This is an event second only to the PASS Summit in Seattle, so there are no good reasons not to attend it.

    Microsoft SQL Server & Business Intelligence Conference 2012, Milan, March 28-29, 2012: this is an Italian conference, so the language might be a barrier, but many of us also speak English and the food is good! Just a few seats still available.

    TechEd North America, Orlando, June 11-14, 2012: you know, this is a big event and it contains everything. If you want to spend a whole day learning the SSAS Tabular model with me and Alberto, don't miss our pre-conference day "Using BISM Tabular in Microsoft SQL Server Analysis Services 2012" (be careful, it is on June 10, a nice study-Sunday!).

    TechEd Europe, Amsterdam, June 26-29, 2012: the European version of TechEd provides almost the same content, and you don't have to go overseas. We also run the same pre-conference day "Using BISM Tabular in Microsoft SQL Server Analysis Services 2012" (in this case on June 25, a regular Monday).

    Alberto and I will also speak at some user group meetings around Europe during... well, we're going to travel a lot in the coming months. In fact, if you want to get complete training on SSAS Tabular, you should spend two days with us in one of our SSAS Tabular Workshops! We prepared a 2-day seminar, a very intense one, that starts from simple tabular modeling and covers architecture, DAX, queries, advanced modeling, security, deployment, optimization, monitoring, and relationships with PowerPivot and Multidimensional. Really, there is a lot of stuff here!

    We announced the first dates in Europe and also an online edition optimized for America's time zone:

    Apr 16-17, 2012 – Amsterdam, Netherlands
    Apr 26-27, 2012 – Copenhagen, Denmark
    May 7-8, 2012 – Online for America's time zone
    May 14-15, 2012 – Brussels, Belgium
    May 21-22, 2012 – Oslo, Norway
    May 24-25, 2012 – Stockholm, Sweden
    May 28-29, 2012 – London, United Kingdom
    May 31-Jun 1, 2012 – Milan, Italy (Italian language)

    Chris Webb will also join us in these workshops, and for every date you can find who the speaker is on the web site. The course is based on our upcoming book, almost 600 pages (!) about SSAS Tabular, an incredible effort that will very soon be available in preview (Rough Cuts from O'Reilly) and will be on the shelf in May. I will provide a link to order it as soon as we have one! And if you think that this is not enough... you're right! Do you know the only thing you can do to optimize your Tabular model? Optimize your DAX code. Learning DAX is easy; mastering DAX requires some knowledge... and our DAX Advanced Workshop provides exactly the required content. Public classes will be available later this year; for now we deliver it on demand only.

    Read the article

  • How to Sync Any Folder With SkyDrive on Windows 8.1

    - by Chris Hoffman
    Before Windows 8.1, it was possible to sync any folder on your computer with SkyDrive using symbolic links. This method no longer works now that SkyDrive is baked into Windows 8.1, but there are other tricks you can use. Creating a symbolic link or directory junction inside your SkyDrive folder will give you an empty folder in your SkyDrive cloud storage. Confusingly, the files will appear inside the SkyDrive Modern app as if they were being synced, but they aren't.

    The Solution

    With SkyDrive refusing to understand and accept symbolic links in its own folder, the best option is probably to use symbolic links anyway, but in reverse. For example, let's say you have a program that automatically saves important data to a folder anywhere on your hard drive, whether it's C:\Users\USER\Documents\, C:\Program\Data, or anywhere else. Rather than trying to trick SkyDrive into understanding a symbolic link, we could instead move the actual folder itself to SkyDrive and then use a symbolic link at the folder's original location to trick the original program. This may not work for every single program out there, but it will likely work for most programs, which use standard Windows API calls to access folders and save files. We're just flipping the old solution here: we can't trick SkyDrive anymore, so let's try to trick other programs instead.

    Moving a Folder and Creating a Symbolic Link

    First, ensure no program is using the external folder. For example, if it's a program data or settings folder, close the program that's using the folder. Next, simply move the folder to your SkyDrive folder. Right-click the external folder, select Cut, go to the SkyDrive folder, right-click and select Paste. The folder will now be located in the SkyDrive folder itself, so it will sync normally. Next, open a Command Prompt window as Administrator. Right-click the Start button on the taskbar or press Windows Key + X and select Command Prompt (Administrator) to open it. Run the following command to create a symbolic link at the original location of the folder:

    mklink /d "C:\Original\Folder\Location" "C:\Users\NAME\SkyDrive\FOLDERNAME\"

    Enter the correct paths for the exact location of the original folder and the current location of the folder in your SkyDrive. Windows will then create a symbolic link at the folder's original location. Most programs should hopefully be tricked by this symbolic location, saving their files directly to SkyDrive. You can test this yourself. Put a file into the folder at its original location. It will be saved to SkyDrive and sync normally, appearing in your SkyDrive storage online. One downside here is that you won't be able to save a file onto SkyDrive without it taking up space on the same hard drive SkyDrive is on. You won't be able to scatter folders across multiple hard drives and sync them all. However, you could always change the location of the SkyDrive folder on Windows 8.1 and put it on a drive with a larger amount of free space. To do this, right-click the SkyDrive folder in File Explorer, select Properties, and use the options on the Location tab. You could even use Storage Spaces to combine the drives into one larger drive.

    Automatically Copy the Original Files to SkyDrive

    Another option would be to run a program that automatically copies files from another folder on your computer to your SkyDrive folder. For example, let's say you want to sync copies of important log files that a program creates in a specific folder. You could use a program that allows you to schedule automatic folder-mirroring, configuring the program to regularly copy the contents of your log folder to your SkyDrive folder. This may be a useful alternative for some use cases, although it isn't the same as standard syncing. You'll end up with two copies of the files taking up space on your system, which won't be ideal for large files. The files also won't be instantly uploaded to your SkyDrive storage after they're created, but only after the scheduled task runs. There are many options for this, including Microsoft's own SyncToy, which continues to work on Windows 8. If you were using the symbolic link trick to automatically sync copies of PC game save files with SkyDrive, you could just install GameSave Manager. It can be configured to automatically create backup copies of your computer's PC game save files on a schedule, saving them to SkyDrive where they'll be synced and backed up online. SkyDrive support was completely rewritten for Windows 8.1, so it's not surprising that this trick no longer works. The ability to use symbolic links in previous versions of SkyDrive was never officially supported, so it's not surprising to see it break after a rewrite. None of the methods above are as convenient and quick as the old symbolic link method, but they're the best we can do with the SkyDrive integration Microsoft has given us in Windows 8.1. It's still possible to use symbolic links to easily sync other folders with competing cloud storage services like Dropbox and Google Drive, so you may want to consider switching away from SkyDrive if this feature is critical to you.
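    As a concrete example (hypothetical paths, substitute your own): moving a game's save folder into SkyDrive and linking it back looks like this in the administrator Command Prompt:

    move "C:\Users\Alice\Documents\My Games\SaveData" "C:\Users\Alice\SkyDrive\SaveData"
    mklink /d "C:\Users\Alice\Documents\My Games\SaveData" "C:\Users\Alice\SkyDrive\SaveData"

    The game keeps writing to its usual path, while the files physically live, and sync, under the SkyDrive folder.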

    Read the article

  • Why does my Perl CGI script raise an internal server error on Apache?

    - by itcplpl
    I've installed apache2 on Ubuntu 11.04, and localhost is working. I created a simple printenv.pl script and put it in the cgi-bin directory:

    $ mv printenv.pl /usr/lib/cgi-bin/
    $ chmod +rx /usr/lib/cgi-bin/printenv.pl

    However, when I go to http://127.0.0.1/cgi-bin/printenv.pl, I get a 500 Internal Server Error. I checked the error log at /var/log/apache2, and this is what it says:

    [Mon Oct 24 11:04:25 2011] [error] (13)Permission denied: exec of '/usr/lib/cgi-bin/printenv.pl' failed
    [Mon Oct 24 11:04:25 2011] [error] [client 127.0.0.1] Premature end of script headers: printenv.pl

    Any suggestions on how I can fix this and run CGI scripts on my localhost?
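    A few checks worth running (a sketch of the usual causes of "(13)Permission denied: exec" for CGI scripts on Ubuntu, not a diagnosis from the original thread):

    # 1. The script must start with a shebang Apache can exec:
    head -1 /usr/lib/cgi-bin/printenv.pl     # expect: #!/usr/bin/perl

    # 2. Execute permission must apply to everyone, not just the owner:
    sudo chmod 755 /usr/lib/cgi-bin/printenv.pl

    # 3. Windows line endings break the shebang; strip them if present:
    sudo sed -i 's/\r$//' /usr/lib/cgi-bin/printenv.pl

    # 4. Every directory on the path must be traversable by www-data:
    namei -m /usr/lib/cgi-bin/printenv.pl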

    Read the article

  • 2011 PASS Board Applicants: Rob Farley

    - by andyleonard
    Introduction: I am interviewing the 2011 PASS Board Nominee Applicants. As listed on the PASS Board Elections site, the applicants are: Rob Farley, Geoff Hiten, Adam Jorgensen, Denise McInerney, Sri Sridharan, Kendal Van Dyke. I'm asking everyone the same questions and blogging the responses in the order received. Rob Farley is first up. Interview With Rob Farley: 1. What's your day job? I run LobsterPot Solutions out of Adelaide, Australia. We're a SQL & BI consultancy, and were the first Microsoft Partner...(read more)

    Read the article

  • Relationship between TDD and Software Architecture/Design

    - by Christopher Francisco
    I'm new to TDD and have been reading the theory, since applying it is more complicated than it sounds when you're learning by yourself. As far as I know, the objective is to write a test case for each requirement and run the test so it fails (to prevent a false positive). Afterward, you should write the minimum amount of code that can pass the test and move to the next one. That being said, is it true that you get fast development? And what about the code itself? This theory makes me think you are not considering things like abstraction, delegation of responsibilities, design patterns, architecture and others, since you're just writing "the minimum amount of code that can pass the test". I know I'm probably wrong, because if this were true we'd have a lot of crappy developers with poor software architecture and documentation, so I'm asking for a guide here: what's the relationship between TDD and Software Architecture/Design?
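    For context, the cycle the question describes is usually summarized as red-green-refactor, and the refactor step is where the design work the questioner is missing takes place. A minimal sketch in Python (all names are illustrative, not from the original post):

    import unittest

    class Cart:
        def __init__(self):
            self._items = []

        def add(self, price):
            self._items.append(price)

        def total(self):
            # Step 2 ("green"): the minimum code that passes the test.
            # Abstraction, patterns and responsibilities enter in
            # step 3 ("refactor"), which the cycle repeats constantly.
            return sum(self._items)

    class CartTest(unittest.TestCase):
        def test_total_sums_item_prices(self):
            # Step 1 ("red"): written first, fails until Cart.total exists.
            cart = Cart()
            cart.add(2.50)
            cart.add(1.25)
            self.assertEqual(cart.total(), 3.75)

    if __name__ == "__main__":
        unittest.main()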

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    A typical implementation pattern we're seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below: here, you can see that all the "downstream integrations" from the On-Premise Core HR are unaffected, because the master for employee data is still your On-Premise Core HR system; all updates and new hires are made here (although they may be fed in from Taleo to start with). As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract), load the shared EBS HR tables (which are part of an EBS Financials install anyway), and let your downstream integrations continue to function based on this data, as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider: creating an On-Premise warehouse for extracting data from Fusion Apps, or leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps.

    Integration Tools

    File Based Loader: This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee and related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"); it will enable a somewhat technical resource to create a data model SQL query.

    Links to Documentation & Training: reference documentation for File Based Loader on docs.oracle.com; FBL 1.1 MOS Doc Id 1533860.1; sample demo data files for File Based Loader; HCM SaaS Integrations ppt and recording.

    EBS APIs: Loading information into EBS Full or Shared HCM. This could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications and EBS Financials). Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] is a guide to the EBS R12 Integration Repository accessible from an EBS instance; see also EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1].

    Fusion HCM Extract: Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms. Additional documentation (you'll need an oracle.com account to access): HCM Extracts User Guides (Rel 4 & 5), HCM Extract Entity/Attributes (Rel 5), HCM Extract User Guide (Rel 5). If you don't have an oracle.com account, download the zipped HCM Extract Rel 5 docs (click File --> Download on the next screen). View the training recordings on Fusion HCM Extract.

    Benefits Extract: To set up the benefits extract, refer to the following guide; page 2-15 of the user documentation describes how to use it. Benefit enrollments can also be uploaded into Fusion Benefits; instructions are here along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract. You can start with the data model query underlying the benefits extract.

    Payroll Interface: Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: payroll interface guide, sample file, and the DBIs used for the payroll interface. Usage patterns always accessible @ http://www.finapps.com

    Read the article

  • Big label generator

    - by jamiet
    Sometimes I write blog posts mainly so that I can find stuff when I need it later. This is such a blog post. Of late I have been writing lots of deployment scripts, and I am a fan of putting big labels into deployment scripts (which, these days, reside in SSDT) so one can easily see what's going on as they execute. Here's such an example from my current project: which results in this being displayed when the script is run. In case you care... PM_EDW is the name of one of our databases. I'm almost embarrassed to admit that I spent about half an hour crafting that and a few others for my current project, because a colleague has just alerted me to a website that would have done it for me, and given me lots of options for how to present it too: http://www.patorjk.com/software/taag/#p=testall&f=Banner3&t=PM__EDW Very useful indeed. Nice one! And yes, I'm sure there are a myriad of sites that do the same thing - I'm a latecomer, ok? @Jamiet
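    For illustration, a label like the ones described (my own mock-up, not Jamie's actual script) is nothing more than a stack of PRINT statements in the SSDT post-deployment script:

    PRINT '##################################################';
    PRINT '##                                              ##';
    PRINT '##          DEPLOYING DATABASE: PM_EDW          ##';
    PRINT '##                                              ##';
    PRINT '##################################################';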

    Read the article

  • SQLUniversity Professional Development Week: Learning To Fly

    - by andyleonard
    Introduction Clem and Jim Bob were out hunting the other day in the woods south of Farmville. As they crossed a ridge, they came upon a big ol' Momma Bear and her cub. The larger bear immediately started towards them. Jim Bob took off running as fast as he could. He stopped when he realized Clem wasn't with him. And when he saw Clem reaching into his pack, Jim Bob was incredulous: "Hurry Clem! That bar's comin' fast! You need to out run 'er!" Clem kicked off his boots and pulled running shoes out...(read more)

    Read the article

  • How to recognize special function keys on keyboard

    - by NikolaiDante
    I have a Microsoft Digital Media 3000 Keyboard. None of the function keys or other special keys seem to do anything. What do I need to do to get them working (at the very least F2, as not having a shortcut to rename a file is driving me mad)? If I run xev and press F2, I get the following output in the terminal:

    KeyPress event, serial 36, synthetic NO, window 0x4800001,
        root 0x15d, subw 0x0, time 42858728, (674,456), root:(1034,588),
        state 0x10, keycode 139 (keysym 0xff65, Undo), same_screen YES,
        XLookupString gives 0 bytes:
        XmbLookupString gives 0 bytes:
        XFilterEvent returns: False

    KeyRelease event, serial 36, synthetic NO, window 0x4800001,
        root 0x15d, subw 0x0, time 42858912, (674,456), root:(1034,588),
        state 0x10, keycode 139 (keysym 0xff65, Undo), same_screen YES,
        XLookupString gives 0 bytes:
        XFilterEvent returns: False
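    One low-risk thing to try (my suggestion, not from the original post): the xev output shows keycode 139 delivering the Undo keysym rather than F2, so the key can be rebound for the current session with xmodmap:

    xmodmap -e 'keycode 139 = F2'

    To keep the mapping across logins, the same expression can go into ~/.Xmodmap on setups that load that file at login.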

    Read the article

  • what is wrong with this easy script

    - by alex
    What is wrong with this easy script? I just want to write a script which changes my directory. A. I put the commands below in a file named pathABC in the /home/alex directory:

    #!/bin/sh
    cd /home/alex/Documents/A/B/C
    echo HelloWorld

    B. I also did chmod +x pathABC. In the terminal, while in the /home/alex directory, I run ./pathABC. But the output is just HelloWorld, and the current directory remains unchanged: my directory stays /home/alex and does not go to /home/alex/Documents/A/B/C. So what is wrong?
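    For the record, a sketch of what is going on (standard shell semantics, not taken from the article): ./pathABC starts a child shell process, and the cd happens inside that child, which exits immediately afterwards. Sourcing the file runs it in the current shell instead:

    # Running ./pathABC changes directory only inside the child shell.
    # Source the file to execute it in the *current* shell:
    . /home/alex/pathABC       # POSIX; bash also accepts: source /home/alex/pathABC
    pwd                        # now /home/alex/Documents/A/B/C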

    Read the article

  • Ghost Incognito Automatically Loads Incognito Mode Based on Domain

    - by Jason Fitzpatrick
    Chrome: Ghost Incognito is a simple Chrome extension that automatically launches Incognito mode on a domain-by-domain basis. If you routinely visit the same sites using Incognito Mode, Ghost Incognito allows you to flag domains. By default it turns on Incognito for all .XXX domains and, once you select some domains, for any that you specify. Thus if you flag angrybirds.com, as we did for our test run of the app, every time you visit angrybirds.com or a sub-domain thereof, such as shop.angrybirds.com, you'll be automatically directed to a new Incognito tab, with no input from you necessary. Ghost Incognito is free, Chrome only. Ghost Incognito [via Addictive Tips]

    Read the article

  • Ubuntu 13.10 AdobeAIR application: Failed to load module "unity-gtk-module"

    - by nobuzz
    I have installed AdobeAIR on 13.10, but am getting the following error messages when using it:

    Gtk-Message: Failed to load module "overlay-scrollbar"
    Gtk-Message: Failed to load module "unity-gtk-module"

    Has anyone faced this issue? I looked at a comment giving a possible resolution here, but that didn't help, as I already have those packages installed. Basically, to install AdobeAIR, it being 32-bit, I had to install various i386 packages: libgtk2.0-0:i386, libnss3-1d:i386, libnspr4-0d:i386, lib32nss-mdns, libxml2:i386, libxslt1.1:i386. When I run the program as root, it opens up properly without errors, but that is not an option. Could it be that the necessary modules are not being loaded in the non-root user's environment?
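    If it works as root but not as a normal user, one guess (mine; the package names below are assumptions to verify, not a confirmed fix) is that only the 64-bit builds of those GTK modules are installed. Checking for and installing their i386 builds would look like:

    apt-cache search unity-gtk                # check which module packages exist
    sudo apt-get install unity-gtk2-module:i386 overlay-scrollbar-gtk2:i386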

    Read the article

  • Why not Green Threads?

    - by redjamjar
    Whilst I know questions on this have been covered already (e.g. http://stackoverflow.com/questions/5713142/green-threads-vs-non-green-threads), I don't feel like I've got a satisfactory answer. The question is: why don't JVMs support green threads anymore? It says this on the code-style Java FAQ: A green thread refers to a mode of operation for the Java Virtual Machine (JVM) in which all code is executed in a single operating system thread. And this over on java.sun.com: The downside is that using green threads means system threads on Linux are not taken advantage of and so the Java virtual machine is not scalable when additional CPUs are added. It seems to me that the JVM could have a pool of system threads equal to the number of cores, and then run green threads on top of that. This could offer some big advantages when you have a very large number of threads which block often (mostly because current JVMs cap the number of threads). Thoughts?
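    To make the proposal concrete, here is a rough Java sketch of what the questioner describes (my illustration, not from the FAQ): a fixed pool of one OS thread per core multiplexing a huge number of lightweight tasks:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class GreenishScheduling {
        public static void main(String[] args) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            // One OS thread per core, as the question proposes...
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            AtomicLong sum = new AtomicLong();
            // ...multiplexing far more tasks than the JVM could host as
            // full OS threads (each OS thread costs its own stack).
            for (int i = 0; i < 1_000_000; i++) {
                final long n = i;
                pool.submit(() -> sum.addAndGet(n));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println("sum = " + sum.get());
        }
    }

    The practical weakness of such M:N schemes also shows here: a task that blocks (on I/O, say) pins its carrier thread and stalls every task queued behind it.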

    Read the article

  • How can I write only to the stencil buffer in OpenGL ES 2.0?

    - by stephelton
    I'd like to write to the stencil buffer without incurring the cost of my expensive shaders. As I understand it, I write to the stencil buffer as a 'side effect' of rendering something. In this first pass where I write to the stencil buffer, I don't want to write anything to the color or depth buffer, and I definitely don't want to run through my lighting equations in my shaders. Do I need to create no-op shaders for this (and can I just discard fragments), or is there a better way to do this? As the title says, I'm using OpenGL ES 2.0. I haven't used the stencil buffer before, so if I seem to be misunderstanding something, feel free to be verbose.
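    For what it's worth, a sketch of the usual approach in plain GL ES 2.0 calls (trivialProgram stands for a minimal pass-through shader program compiled separately; it replaces the expensive shaders for this pass, since color output is masked off anyway). Note that discarding fragments would not help here: a fragment killed by discard skips the stencil write too.

    #include <GLES2/gl2.h>

    /* Pass 1: render the mask geometry into the stencil buffer only.
     * Color and depth writes are masked off, so whatever the bound
     * fragment shader outputs is thrown away, which is why a trivial
     * program, not the expensive lighting shaders, should be bound. */
    void stencil_only_pass(GLuint trivialProgram)
    {
        glEnable(GL_STENCIL_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* no color writes */
        glDepthMask(GL_FALSE);                               /* no depth writes */
        glStencilFunc(GL_ALWAYS, 1, 0xFF);                   /* all fragments pass */
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);           /* write 1 where drawn */
        glUseProgram(trivialProgram);
        /* ... issue draw calls for the masking geometry here ... */

        /* Pass 2: restore the masks, bind the real shaders, and test
         * against the stencil value written above. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    }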

    Read the article
