Search Results

Search found 15535 results on 622 pages for 'mat keep'.


  • ArchBeat Link-o-Rama Top 10 for October 21-27, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 21-27, 2012. OTN Architect Day: Los Angeles This is your brain on IT architecture. Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. [NOTE: The event was last week, of course. Big thanks to the session presenters and especially to those Angelenos who came out for the event.] WebLogic Server 11gR1 Interactive Quick Reference "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner." Podcast: Are You Future Proof? The latest OTN ArchBeat Podcast series features Oracle ACE Directors Ron Batra, Basheer Khan, and Ronald van Luttikhuizen, three practicing architects in an open discussion about how changes in enterprise IT are raising the bar for success for software architects and developers. Play Oracle Vanquisher Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company’s position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as 'TEN.' My score? Never mind. Advanced Oracle SOA Suite OOW 2012 Presentations The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks. OAM and OIM 11g Academies Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes contain the articles we’ve written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations." Oracle’s Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca. Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki. Following the Thread in OSB | Antony Reynolds Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads."
He shares the details of the problem and the solution in this extensive technical post. OW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting. Thought for the Day "Not everything that can be counted counts, and not everything that counts can be counted." — Albert Einstein (March 14, 1879 – April 18, 1955) Source: Quotes For Software Engineers

    Read the article

  • To SYNC or not to SYNC – Part 4

    - by AshishRay
    This is Part 4 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In Part 3, I talked in detail about why Data Guard SYNC is a good thing, and the distance implications you have to keep in mind. In this final article of the series, I will talk about how you can nicely complement Data Guard SYNC with the ability to fail over in seconds. Wait - Did I Say “Seconds”? Did I just say that some customers do Data Guard failover in seconds? Yes, Virginia, there is a Santa Claus. Data Guard has an automatic failover capability, aptly called Fast-Start Failover. Initially available with Oracle Database 10g Release 2 for Data Guard SYNC transport mode (and enhanced in Oracle Database 11g to support Data Guard ASYNC transport mode), this capability, managed by Data Guard Broker, lets your Data Guard configuration automatically fail over to a designated standby database. Yes, this means no human intervention is required to do the failover. This process is controlled by a low footprint Data Guard Broker client called Observer, which makes sure that the primary database and the designated standby database are behaving like good kids. If something bad were to happen to the primary database, the Observer, after a configurable threshold period, tells that standby, “Your time has come, you are the chosen one!” The standby dutifully follows the Observer directives by assuming the role of the new primary database. The DBA or the Sys Admin doesn’t need to be involved. And - in case you are following this discussion very closely, and are wondering … “Hmmm … what if the old primary is not really dead, but just network isolated from the Observer or the standby - won’t this lead to a split-brain situation?” The answer is No - It Doesn’t. With respect to why-it-doesn’t, I am sure there are some smart DBAs in the audience who can explain the technical reasons. Otherwise - that will be the material for a future blog post. So - this combination of SYNC and Fast-Start Failover is the nirvana of lights-out, integrated HA and DR, as practiced by some of our advanced customers. They have observed failover times (with no data loss) ranging from single-digit seconds to tens of seconds. With this, they support operations in industry verticals such as manufacturing, retail, telecom, Internet, etc. that have the most demanding availability requirements. One of our leading customers with massive cloud deployment initiatives tells us that they know about server failures only after Data Guard has automatically completed the failover process and the app is back up and running! Needless to say, Data Guard Broker has the integration hooks for interfaces such as JDBC and OCI, or even for custom apps, to ensure the application gets automatically rerouted to the new primary database after the database level failover completes. Net Net? To sum up this multi-part blog article, Data Guard with SYNC redo transport mode, plus Fast-Start Failover, gives you the ideal triple-combo - that is, it gives you the assurance that for critical outages, you can fail over your Oracle databases: very fast, without human intervention, and without losing any data. In short, it takes the element of risk out of critical IT operations.
It does require you to be more careful with your network and systems planning, but as far as HA is concerned, the benefits outweigh the investment costs. So, this is what we in the MAA Development Team believe in. What do you think? How has your deployment experience been? We look forward to hearing from you!
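    For readers who want a rough idea of what turning this on looks like, here is a minimal Data Guard Broker (DGMGRL) sketch. The database names 'chicago' and 'boston' and the 30-second threshold are placeholders, not recommendations; adapt them to your own Broker configuration after connecting as SYSDBA:

        DGMGRL> EDIT DATABASE 'chicago' SET PROPERTY LogXptMode = 'SYNC';
        DGMGRL> EDIT DATABASE 'boston' SET PROPERTY LogXptMode = 'SYNC';
        DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
        DGMGRL> ENABLE FAST_START FAILOVER;
        DGMGRL> START OBSERVER;

    The Observer is best run on a host separate from both the primary and the standby, so that it can arbitrate between them when they cannot see each other.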

    Read the article

  • The Power of Goals

    - by BuckWoody
    Every year we read blogs, articles, magazines, hear news stories and blurbs on making New Year’s Resolutions. Well, I for one don’t do that. I do something else. Each year, on January 1, my wife, daughter and I get up early - like before 6:00 A.M. - and find a breakfast place that’s open. When I used to live in Safety Harbor, Florida, that was the “Paradise Café”, which has some of the best waffles around…but I digress. We find that restaurant and have a great breakfast while everyone else is recuperating from the night before. And we bring along a worn leather book that we’ve been writing in since my daughter wasn’t even old enough to read. It’s our book of Goals. A resolution, as it is purely defined, is a decision to change, stop or start an action. It has a sense of continuance, and that’s the issue. Some people decide things like “I’m going to lose weight” or “I’m going to spend more time with my family or hobby”. But a goal is different. A goal tends to have a defined start and end point. It’s something that can be measured. So each year on January 1 we sit down with the little leather book and we make a few - and only a few - individual and family goals. Sometimes it’s to exercise three times a week at the gym, sometimes it’s to save a certain percentage of income, and sometimes it’s to give away some of our possessions or to help someone we know in a specific way. Each person is responsible for their own goals - coming up with them, and coming up with a plan to meet them. Then we write it down in the little leather book. But it doesn’t end there. Each month, we grab the little leather book and read out the goals from that year to each person with a question or two: How are you doing on your goal? And what are you doing about reaching it? Can I help? Am I helping? At the end of the year, we put a checkmark by the goals we reached, and an X by the ones we didn’t. There’s no judgment, there’s no statements, each person is just expected to handle the success or failure in their own way. We also have family goals, and those we work on together. This might seem a little “corny” to some people. “I don’t need to write goals down” they say, “I keep track in my head of the things I do all the time. That’s silly.” But let me give you a little challenge: find a book, get with your family, and write down the things you want to do by the next January 1. Each month, look at the book. You can make goals for your career, your education, your spiritual side, your family, whatever. But if you make your goals realistic, think them through, and think about how you will achieve them, you will be surprised by the power of written goals.

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has been building its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience and combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the best of both worlds in a solution that enables SOA Governance projects, initiatives and programs. Enterprise Architecture Management System Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include: Asset Management Asset Lifecycle Management Usage Tracking Service Discovery Version Management Dependency Analysis Portfolio Management EAMS - OER Edition The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance Initiatives and Programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries. The SOA blueprints The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can be easily created in order to accommodate the organization's needs in terms of detail, audience and metadata. Charts & Dashboards The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets. Time Based Visualization All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future. The solution delivers Gap Analysis and a project-oriented approach while taking into consideration the As-Was, As-Is and To-Be. Time based visualization differentiating factors: Extensive automation and maintenance of architectural representations. Organization-wide solution. Easy access and navigation to and between all architectural artifacts and representations.
    Flexible meta-model, customization and extensibility capabilities. Lifecycle management and enforcement of the time dimension over all the repository content. Profile based customization. Comprehensive visibility. Architectural alignment. Friendly and striking user interfaces. For more information on EAMS visit us here. For more information on SOA visit us here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Upgrade issues due to broken "dependency problems prevent configuration of linux-image-generic" error

    - by tsukune1791
    Okay, I've recently upgraded from 11.10 to 12.04 and I've been having some issues. I don't know if it's a bug or not, but I thought I would submit it here. Okay, here's a little background; I ran the distro update from the update manager and got a couple errors that I didn't catch. The computer restarted, and when I logged in, the Launcher and my top bar of the Ubuntu desktop didn't load. While it was trying to load a couple error messages came up, I think they were called "apport", saying they couldn't send the bug information for some reason. I believe it said something's wrong with my internet connection, but nothing's wrong with it. Anyway I tried running some things in terminal, namely sudo apt-get -f install, sudo apt-get upgrade and sudo apt-get dist-upgrade, and keep getting the following errors: dustin@marceau-laptop:~$ sudo apt-get dist-upgrade [sudo] password for dustin: Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y Setting up initramfs-tools (0.99ubuntu13) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-24-generic (3.2.0-24.37) ... Running depmod. update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/zz-runlilo 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic Fatal: No images have been defined. run-parts: /etc/kernel/postinst.d/zz-runlilo exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-24-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-24-generic; however: Package linux-image-3.2.0-24-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.24.26); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic Fatal: No images have been defined.
    run-parts: /etc/initramfs/post-update.d//runlilo exited with return code 1 dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-24-generic linux-image-generic linux-generic initramfs-tools localepurge: Disk space freed in /usr/share/locale: 0 KiB localepurge: Disk space freed in /usr/share/man: 0 KiB localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB localepurge: Disk space freed in /usr/share/omf: 0 KiB localepurge: Disk space freed in /usr/share/doc/kde/HTML: 0 KiB Total disk space freed by localepurge: 0 KiB E: Sub-process /usr/bin/dpkg returned an error code (1) And my Ubuntu desktop is still not working. I can log into Gnome and Ubuntu 2D but the Launcher, I think it's called, doesn't load. Can someone help me fix these errors, or point me in the right direction to get them fixed? It is much appreciated.

    Read the article

  • Ti Launchpad

    - by raysmithequip
    Just thought I would get a couple of notes up here for reference to anyone that is interested...it is now Feb 2011 and I have not been posting here enough to remember this blog. Back in Nov 2010 I ordered the Ti launchpad msp430, it is a little target board kit replete with a mini USB cable, two very inexpensive programmable mcu's and a couple of pin headers with a couple of led's on board, a spi connector, some on board jumpers and two programmable micro switches....all for less than $5.00...INCLUDING SHIPPING!!....not bad when the Arduinos are running around $20.00 for the target board, atmega328 and cable off of eBay...I won't even mention the microchip pic right now.  Naw, for $5.00 the Ti launchpad kit is about the cheapest fun around...if-uns you're a geek that is... Well, the launchpad was backordered for almost two months, came like Xmas eve in fact...I had almost forgotten it!! And really, it was way late and not my idea of an Xmas present for myself.  That would have been the web expressions 4 I bought a few weeks back.  With all the holidays, I did not even look at it till last week, in fact I passed the wrapped board around at my local ham club meeting during points of personal privilege....some oh's and ahhs but mostly duhs...I actually ordered it to avoid downloading the huge Code Composer Studio 4 (CCS) that was supposed to be included on the cd.  No cd.  I had already downloaded IAR, another programming IDE for these little micro bugs. In my spare time I toyed with IAR and the launchpad board but after about two days of playing delete the driver with windows I decided to just download CCS 4, the code limited version, and give that a shot......CCS 4 is a good rewrite from the earlier versions; it is based on Eclipse as an IDE and includes the drivers for the msp430 target board I received in the kit.  Once installed I quickly configured the debugger for the target chip which was already plugged into the dip socket at the factory, picked msp430G2131 from the drop down list and clicked ok...I was in!! The CCS4 is full of bells and whistles compared to the IAR, which I would have preferred for the simplicity.  But Code Composer Studio really does have it all!!..the code limited version is free, and of all things will give you a JavaScript editor box.  The whole layout in debugger mode reminds me of any modern programmer IDE...I mean sure give me Tex anytime but you simply must admire all the boxes and options included in the GUI.  It was a simple matter to check the assembly code in the flash and ram memory that came preloaded for the launchpad kit.  Assembly.  I am right now looking for my old assembly textbooks...sure I remember how to use mov and add etc but a couple of the commands are a little more than vague anymore.  Still, these little mcu's are about 50 cents each and might just work in a couple of projects I have lined up for the near future.  I may document the code here.  Luckily, I plan to write the code in c++ for the main project but if it has to be assembly, no prob.  For reference, the program that came already on the 2131 in the kit was a temperature indicator that alternately flashed red and green leds and changed the intensity of either depending on whether the temp was rising or falling...neat.  Neat enough that it might be worthwhile banging out a little GUI in windows 7 to test the new user device system calls, maybe put a temp gauge widget up on the desktop...just to keep from getting bored.  
    If you see some assembly code on this blog, you know I was doing something with one of the many mcu's out there.....that's all for now, more to follow...a bit later, of course.
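    In the meantime, for a flavor of what code for these little parts looks like in CCS, here is a minimal C sketch of the classic LaunchPad LED blinker. This is only an illustration for the G2-series LaunchPad with LEDs on P1.0 and P1.6, not the temperature demo that ships preloaded on the 2131:

        #include <msp430.h>

        int main(void)
        {
            WDTCTL = WDTPW | WDTHOLD;      /* stop the watchdog timer */
            P1DIR |= BIT0 | BIT6;          /* P1.0 (red LED) and P1.6 (green LED) as outputs */
            P1OUT = BIT0;                  /* start with red on, green off */

            for (;;)
            {
                P1OUT ^= BIT0 | BIT6;      /* toggle both LEDs so they alternate */
                __delay_cycles(250000);    /* crude delay at the ~1 MHz default clock */
            }
        }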

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at the Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme with the title: F# Today/F# Tomorrow. He did show how to use F# scripting to browse through open databases, OData Web Services, SharePoint, … interactively. It looked really easy with the help of F# Type Providers which is the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like in a usual strongly typed object model. No guessing what the property of an object is called. Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. Let's use e.g. a free database; it would then be let data = DbProvider(http://.....); with data being the object which contains all data from e.g. a chemical database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed which makes this happen. The same can be achieved of course with code generators that use as input the schema of the input data (OData Web Service, database, Sharepoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work but has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators since the generated “deserialization” assembly could easily become several hundred MB. *** The following part contains my guessing of how this exactly works, based on asking Don two questions *** Q: Can I use Type Providers within C#? D: No. Q: F# is after all a library. I can reference the F# assemblies and use the contained Type Providers? D: F# does annotate the generated types in a special way at runtime which is not a static type that C# could use. The F# type providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the url of your input data. The obtained schema information is used by the compiler to generate static types as usual but only for a small subset (the top level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time which could be a problem if you want to keep the actual url in a config file. Ok so this explains why it does work at all. But in the demo we did see full intellisense support down to the deepest object level. It looks like if you navigate deeper into the object hierarchy the type provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static only if your definition of a static type is that its properties show up in intellisense. But this type information is determined while you are typing, it is not used to generate a true static type, and you cannot use these “intellistatic” types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any datasource within seconds to quickly get an overview of any structured data storage. My favorite programming language C# will not get such features in the near future, but there is hope.
    If you restrict yourself to OData sources you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow with a simple LINQ query. The output is also nicely rendered, which makes it a very good tool to explore OData sources today.

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for the Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can get into the multiple-GB range in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course you now had no data to go look at. Hmmmm.... All of this is no longer a concern. Check out the new Settings tab under Analytics... Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value.  This allows me to effectively shrink my older datasets by a factor of 1/3600!!! Very cool. I can now allow my datasets to go forever, and really never have to worry about them filling up my OS drives. That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32MB on disk (You need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time, just relax).  When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you this prompt: As you can see, this allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this. So it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset. Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what size of granularity you can live with, and for how long. Check this out. Here is my Disk: Percent utilized from 5-21-2012 2:42 pm to 4:22 pm: After I went through the delete process to change everything older than 1 week to "Minutes", the same date and time looks like this: Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "seconds", and then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look. Steve

    Read the article

  • EPM troubleshooting Utilities

    - by THE
    (in via Maurice) "Are you keeping up-to-date with the latest troubleshooting utilities introduced from EPM 11.1.2.2 onwards? These are typically not described in product documentation, so you might miss references to them. The following five utilities may be run from the command line. (1) Deployment Report was introduced with EPM 11.1.2.2 (11 April 2012). It details logical web addresses, web servers, application ports, database connections, user directories, database repositories configured for the EPM system, data directories used by EPM system products, instance directories, FMW homes, deployment history, et cetera. It also helps to keep you honest about whether you made changes to the system and at what times! Download Shared Services patch 13530721 to get the backported functionality in EPM 11.1.2.1. Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/epmsys_registry.sh report deployment (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\epmsys_registry.bat report deployment (on Microsoft Windows). The output is saved under \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\deployment_report.html (2) Log Analysis has received more "press". It was released with EPM 11.1.2.3 and helps the user to slice and dice EPM logs. It has many parameters which are documented when run without parameters, when run with the -h parameter, or in the 'Readme' file. It has also been released as a standalone utility for EPM 11.1.2.3 and earlier versions. (Sign in to My Oracle Support, click the 'Patches & Updates' tab, enter the patch number 17425397, and click the Search button. Download the appropriate platform-specific zip file, unzip, and read the 'Readme' file. Note that you must provide a proper value to a JAVA_HOME environment variable [pointing to the parent directory of the Java /bin subdirectory] in the loganalysis.bat | .sh file and use the -d parameter when running standalone.) Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/loganalysis.sh -h (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\loganalysis.bat -h (on Microsoft Windows). The output is saved under the \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\ subdirectory. (3) The Registry Cleanup command may be used (without fear!) to clean up various corruptions which can affect the Hyperion (database-based) Repository. Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/registry-cleanup.sh (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\registry-cleanup.bat (on Microsoft Windows). The actions are described on the command line. (4) The Remove Instance Command is only used if there are two or more instances configured on one computer and one of those should be deleted. Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/remove-instance.sh (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\remove-instance.bat (on Microsoft Windows). (5) The Reset Configuration Tool was introduced with EPM 11.1.2.2. It nullifies Shared Services Hyperion Registry settings so that a service may be reconfigured. You may locate the values to substitute for <product> or <task> by scanning registry.html (generated by running epmsys_registry.bat | .sh). Find productNAME in INSTANCE_TASKS_CONFIGURATION and SYSTEM_TASKS_CONFIGURATION nodes and identify tasks by property pairs that have values of 'Configurated' or 'Pending'.
    Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/resetConfigTask.sh -product <product> -task <task> (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\resetConfigTask.bat -product <product> -task <task> (on Microsoft Windows)." Thanks to Maurice for this collection of utilities.

    Read the article

  • your AdSense account poses a risk of generating invalid activity

    - by Karington
    I received a mail from the AdSense team, quoted below. I am not an AdSense expert; I'm actually quite new to it. I spent a lot of time on my site http://www.media1.rs, it's a news aggregator with tons of options. In the meantime I discovered the DoubleClick service that had a good option to turn on Google ads when you don't have any others running, so I joined up for Google AdSense with my company account. Everything went smoothly until one day (21.Jul.2011) I got an email... Hello, After reviewing our records, we've determined that your AdSense account poses a risk of generating invalid activity. Because we have a responsibility to protect our AdWords advertisers from inflated costs due to invalid activity, we've found it necessary to disable your AdSense account. Your outstanding balance and Google's share of the revenue will both be fully refunded back to the affected advertisers. Please understand that we need to take such steps to maintain the effectiveness of Google's advertising system, particularly the advertiser-publisher relationship. We understand the inconvenience that this may cause you, and we thank you in advance for your understanding and cooperation. If you have any questions or concerns about the actions we've taken, how you can appeal this decision, or invalid activity in general, you can find more information by visiting http://www.google.com/adsense/support/bin/answer.py?answer=57153. Sincerely, The Google AdSense Team At first I didn't have any idea why... but then it came to me that it was maybe the auto refresh script we had, because we publish news very very often and it would be useful for visitors... but I removed it immediately after I got the mail... Then I thought it might be my friends clicking, thinking that that would help me (I didn't tell them to do it and don't know if they did), or something like that, but then it couldn't be that, because otherwise anyone could organize 10 people and get any start-up banned, right? Anyway I filled out the form that was on the answers page with the previously removed script and got this from them: Hello, Thank you for your appeal. We appreciate the additional information you've provided, as well as your continued interest in the AdSense program. However, after thoroughly re-reviewing your account data and taking your feedback into consideration, our specialists have confirmed that we're unable to reinstate your AdSense account. As a reminder, if you have any questions or concerns about your account, the actions we've taken, or invalid activity in general, you can find more information by visiting http://www.google.com/adsense/support/bin/answer.py?answer=57153. I do understand that they have to keep things secret in a way, but I don't know what I'm supposed to do now. Is there a checklist that I can go through before I re-apply? Where do I re-apply, on the same form? Please help, as we are a small company and can't really budget for hiring a specialist, and don't know any either... P.S. The current ads on the site are my own, served through DoubleClick... Thanks in advance! Best, Karington

    Read the article

  • Find Rules and Defaults using the PowerShell for SQL Server 2008 Provider

    - by BuckWoody
    I ran into an issue the other day where I couldn't set up some features in SQL Server 2008 because they don't support the use of Rules or Defaults. Let me explain a little more about that. In older versions of SQL Server, you could declare a "Rule" or "Default" just like you do with a Table Constraint today. You would then "bind" these rules or defaults to the tables you wanted them to apply to. Sure, there are advantages and disadvantages to this approach, but it certainly isn't standard Data Definition Language (DDL), so they are deprecated and many features don't work with them any more. Honestly, it's been so long since I've seen them in use I had forgotten to even check for them. My suspicion is that this was a new database created with an older script. Nevertheless, the feature failed when it ran into one. Immediately I thought that I had better build some logic into my process to try and catch those - but how? Lots of choices here, but since I was using PowerShell to do the rest of the work, I thought I would investigate how easy it would be just to do it there. And using the SQL Server 2008 provider, this could not be simpler. I won't show all of the script here, because I was testing for these as a condition and then bailing out of the script and sending a notification, but all it is using is the DIR command! Here's an example on my "UNIVAC" computer for the "pubs" database: Find Rules using PowerShell: dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases\pubs\Rules and find Defaults with dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases\pubs\Defaults. And this one will look in all databases: #All Databases: dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases | select-object -property Name, Rules, Defaults Awesome. Love me some PowerShell. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Designing status management for a file processing module

    - by bot
    The background One piece of functionality of a product that I am currently working on is to process a set of compressed files (containing XML files) that will be made available at a fixed location periodically (local or remote location - doesn't really matter for now) and dump the contents of each XML file in a database. I have taken care of the design for a generic parsing module that should be able to accommodate the parsing of any file type as I have explained in my question linked below. There is no need to take a look at the following link to answer my question but it would definitely provide a better context to the problem Generic file parser design in Java using the Strategy pattern The Goal I want to be able to keep track of the status of each XML file and the status of each compressed file containing the XML files. I can probably have different statuses defined for the XML files such as NEW, PROCESSING, LOADING, COMPLETE or FAILED. I can derive the status of a compressed file based on the status of the XML files within the compressed file. e.g. the status of the compressed file is COMPLETE if no XML file inside the compressed file is in a FAILED state, or the status of the compressed file is FAILED if the status of at least one XML file inside the compressed file is FAILED. A possible solution The Model I need to maintain the status of each XML file and the compressed file. I will have to define some POJOs for holding the information about an XML file as shown below. Note that there is no need to store the status of a compressed file as the status of a compressed file can be derived from the status of its XML files. public class FileInformation { private String compressedFileName; private String xmlFileName; private long lastModifiedDate; private int status; public FileInformation(final String compressedFileName, final String xmlFileName, final long lastModified, final int status) { this.compressedFileName = compressedFileName; this.xmlFileName = xmlFileName; this.lastModifiedDate = lastModified; this.status = status; } } I can then have a class called StatusManager that aggregates a Map of FileInformation instances and provides me the status of a given file at any given time in the lifetime of the application as shown below : public class StatusManager { private Map<String,FileInformation> processingMap = new HashMap<String,FileInformation>(); public void add(FileInformation fileInformation) { fileInformation.setStatus(0); // 0 indicates that the file is in NEW state, 1 indicates that the file is in process, and so on. processingMap.put(fileInformation.getXmlFileName(),fileInformation); } public void update(String filename,int status) { FileInformation fileInformation = processingMap.get(filename); fileInformation.setStatus(status); } } That takes care of the model for the sake of explanation. So what's my question? Edited after comments from Loki and answer from Eric: I would like to know if there are any existing design patterns that I can refer to while coming up with a design. I would also like to know how I should go about designing the status management classes. I am more interested in understanding how I can model the status management classes. I am not interested in how other components are going to be updated about a change in status at the moment as suggested by Eric.
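    To make the derivation rule above concrete, here is a rough Java sketch of how the roll-up from XML file statuses to a compressed-file status could look. The FileStatus enum, the getters and the method name are placeholders of my own, assuming the int status field were swapped for an enum; they are not part of the existing code:

        public enum FileStatus { NEW, PROCESSING, LOADING, COMPLETE, FAILED }

        // Derives the status of one compressed file from the statuses of its XML files.
        public FileStatus deriveCompressedFileStatus(String compressedFileName) {
            boolean anyFailed = false;
            boolean allComplete = true;
            for (FileInformation info : processingMap.values()) {
                if (!compressedFileName.equals(info.getCompressedFileName())) {
                    continue;                      // only consider XML files from this archive
                }
                if (info.getStatus() == FileStatus.FAILED) {
                    anyFailed = true;              // one failure is enough to fail the archive
                }
                if (info.getStatus() != FileStatus.COMPLETE) {
                    allComplete = false;
                }
            }
            if (anyFailed)   return FileStatus.FAILED;
            if (allComplete) return FileStatus.COMPLETE;
            return FileStatus.PROCESSING;          // some files are still in flight
        }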

    Read the article

  • what differs a computer scientist/software engineer to regular people who learn programming language and APIs?

    - by Amumu
    In University, we learn and reinvent the wheel a lot to truly learn the programming concepts. For example, we may learn assembly language to understand what happens inside the box and how the system operates when we execute our code. This helps us understand higher level concepts more deeply. For example, memory management like in C is just an abstraction of manually managed memory contents and addresses. The problem is, when we go to work, productivity is usually what is required. I could program my own containers, or string class, or date/time (using POSIX C system calls) to do the job, but that would take much longer than using the existing STL or Boost libraries, which abstract all of those things and are very easy to use. This leads to an issue: a regular person who learns only one programming language and its related APIs doesn't need to go through all the low level/under-the-hood stuff. These people may eventually compete with the mainstream graduates from computer science or software engineering and call themselves programmers. At first, I didn't think it was valid to call them programmers. I used to think a real programmer needs to understand the computer deeply (but not at the electronic level). But then I changed my mind. After all, they get the job done and satisfy all the test criteria (logic, performance, security...), and in a business environment, who cares whether you're an expert and understand how the computer works or not. You may fall behind the "amateurs" if you spend too much time learning about how things work inside. It is totally valid for those people to call themselves programmers. This makes me confused. So, after all, should programming be considered a universal skill? Do the programming language and concepts matter, or do the problems we solve matter? For example, in the many C/C++ vs Java (and other high level language) debates, one of the main arguments is that C/C++ offers performance, as well as access to low level facilities. One of the main reasons (in my opinion) is that coding in C/C++ seems complex, so people feel good about it (not trolling anyone, just my observation, and my experience as well. Try to google "C hacker syndrome"). Java, on the other hand, was made to simplify programming tasks and help developers concentrate on solving their problems. Based on that rationale, if programming languages keep evolving, one day everyone will be able to map their logic directly to natural language. Everyone can program. On that day, maybe the real programmers will be mathematicians, who can perform the most complex logic (including business logic and academic logic) without worrying about installing/configuring compilers and IDEs. What's our job as computer scientists/software engineers? To solve computer-specific problems or to solve problems in general? For example, take a look at this exam: http://cm.baylor.edu/ICPCWiki/attach/Problem%20Resources/2010WorldFinalProblemSet.pdf . The example requires only basic knowledge of the programming language, but focuses more on problem solving with the language. In sum, what differentiates a computer scientist/software engineer from regular people who learn a programming language and APIs? A mathematician can be considered a programmer if he is good enough to use a programming language to implement his formulas. Can we programmers do this? Probably not, for most of us, since we specialize in computers, not math. An electronic engineer, who learns how to use C to program his devices, can be considered a programmer.
    If programming languages keep being simplified, may software engineers, who implement business logic and create software, one day become obsolete? (Not computer scientists though, since many of the CS topics are scientific, and science won't change, but technology will.)

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (this is no new information – just some bits and pieces compiled at one single place – plus a bit of reality check) When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here. The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed. Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role) the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of type RoleEnvironmentChange. This in turn is a base class for two other classes that detail the two types of possible configuration changes I mentioned above: RoleEnvironmentConfigurationSettingsChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure if your application can handle these configuration changes at runtime, or if you rather want a clean restart. Prior to the SDK 1.3 VS Templates – the following code was generated to reboot if any configuration settings have changed: private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e) {     // If a configuration setting is changing     if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))     {         // Set e.Cancel to true to restart this role instance         e.Cancel = true;     } } This is a little drastic as a default since most applications will work just fine with changed configuration – maybe that’s the reason this code has gone away in the 1.3 SDK templates (more). The Changed event gets fired after the configuration changes have been applied. Again the changes will get passed in just like in the Changing event. But from this point on RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart. You can use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: When you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved) – you typically don’t need to recycle. In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I indeed need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice! Gotcha A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint derived class. But with the move to the full IIS model in 1.3 – the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here.. 
    You might not be able to call into your application code to e.g. clear a cache. Keep that in mind! In this case you need to handle these events from e.g. global.asax.
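    For illustration, here is a rough C# sketch of the approach described above, wired up from global.asax as the gotcha suggests. ConfigurationCache is a hypothetical stand-in for whatever in-memory configuration object your application caches:

        using System;
        using System.Linq;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Subscribe from the web application itself (not RoleEntryPoint) so the
                // handler runs in the same AppDomain as the cached configuration.
                RoleEnvironment.Changed += (s, args) =>
                {
                    if (args.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any())
                    {
                        // From this point GetConfigurationSettingValue returns the new values,
                        // so clearing the in-memory copy is enough - no recycle needed.
                        ConfigurationCache.Clear();
                    }
                };
            }
        }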

    Read the article

  • How do I cap rendering of tiles in a 2D game with SDL?

    - by farmdve
    I have some boilerplate code working, I basically have a tile-based map composed of just 3 colors, and some walls, and render with SDL. The tiles are in a bmp file, but each tile inside it corresponds to an internal number of the type of tile (color, or wall). I have pretty basic collision detection and it works, I can also detect continuous presses, which allows me to move pretty much anywhere I want. I also have a moving camera, which follows the object. The problem is that the tile-based map is bigger than the resolution, thus not all of the map can be displayed on the screen, but it's still rendered. I would like to cap it, but since this is new to me, I pretty much have no idea. Although I cannot post all the code, as even though I am a newbie and the code is pretty basic, it's already quite a few lines, I can post what I tried to do void set_camera() { //Center the camera over the dot camera.x = ( player.box.x + DOT_WIDTH / 2 ) - SCREEN_WIDTH / 2; camera.y = ( player.box.y + DOT_HEIGHT / 2 ) - SCREEN_HEIGHT / 2; //Keep the camera in bounds. if(camera.x < 0 ) { camera.x = 0; } if(camera.y < 0 ) { camera.y = 0; } if(camera.x > LEVEL_WIDTH - camera.w ) { camera.x = LEVEL_WIDTH - camera.w; } if(camera.y > LEVEL_HEIGHT - camera.h ) { camera.y = LEVEL_HEIGHT - camera.h; } } set_camera() is the function which calculates the camera position based on the player's position. I won't pretend I know much about it. Rectangle box = {0,0,0,0}; for(int t = 0; t < TOTAL_TILES; t++) { if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT)) apply_surface(box.x - camera.x, box.y - camera.y, surface, screen, &clips[tiles[t]]); box.x += TILE_WIDTH; //If we've gone too far if(box.x >= LEVEL_WIDTH) { //Move back box.x = 0; //Move to the next row box.y += TILE_HEIGHT; } } This is basically my render code. The for loop loops over 192 tiles stored in an int array, each with their own unique value describing the tile type (color, or wall). box is an SDL_Rect containing the current position of the tile, which is calculated on render. TILE_HEIGHT and TILE_WIDTH are of value 80. So the cap is determined by if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT)) However, this is just me playing with the values and seeing what doesn't break it. I pretty much have no idea how to calculate it. My screen resolution is 1024x768, and the tile map is of size 1280x960.
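    One common way to cap the work, offered here only as a sketch reusing the names from the question, is to compute the range of tile rows and columns that the camera rectangle overlaps and loop over just those, instead of walking all TOTAL_TILES and testing each one. Because set_camera() already clamps the camera inside the level, the computed indices stay in range:

        // Assumes tiles[] is laid out row-major, with LEVEL_WIDTH / TILE_WIDTH tiles per row.
        const int tilesPerRow = LEVEL_WIDTH / TILE_WIDTH;

        int firstCol = camera.x / TILE_WIDTH;
        int firstRow = camera.y / TILE_HEIGHT;
        int lastCol  = (camera.x + camera.w - 1) / TILE_WIDTH;
        int lastRow  = (camera.y + camera.h - 1) / TILE_HEIGHT;

        for(int row = firstRow; row <= lastRow; row++)
        {
            for(int col = firstCol; col <= lastCol; col++)
            {
                int t = row * tilesPerRow + col;
                // Draw relative to the camera, exactly as before, but only for visible tiles.
                apply_surface(col * TILE_WIDTH - camera.x,
                              row * TILE_HEIGHT - camera.y,
                              surface, screen, &clips[tiles[t]]);
            }
        }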

    Read the article

  • Don't Forget To Enjoy Life

    - by Justin
    I have a pretty clear stance on posting personal information in my blogs. I tend to avoid it almost instinctively. Part of that is because I am a somewhat private person. And the other is because I know how easy it is for personal information to be gathered and collected from sources such as blogs. So, this has remained a tech only blog for me. I've only posted topics mostly related to issues I have encountered at work. In a way this blog is a 'bookmark' for me. If I post something here and run into the issue again it allows me to refer back to a convenient place where the 'fix' is documented in a way that I understand. But today, I am posting something that speaks to everyone. Something PERSONAL. Honestly, I expect this entry to receive zero views. But if nothing else, I can come back to this blog one day when I'm having a bad day or something and run across this post. And I will be reminded... DON'T FORGET TO ENJOY LIFE. Say this to yourself out loud, right now. People, we can get caught up in some rather mundane details as we trek through life. It's so easy to lose track of what really matters that it should be no surprise to find yourself reading something like this and thinking to yourself 'Yeah. You are right, man. Some of this crap I'm clinging on to right now is so small in the grand scheme of things'. I have no reservation, no shame, in saying that I am more often than not caught up in the ever evolving world of 'shit that does not matter'. When you work in technology, you are surrounded by deadlines, upgrades, new versions, support 'end of life', etc. And by time you get done with your 8 hours you go home and put in a few more because you are STILL CAUGHT UP in the things you dealt with at work all day. DO YOURSELF A FAVOR. DO YOUR FAMILY AND FRIEND A FAVOR. When you are done for the day, and you drive home, get those work-related things out of your head before you pull into the driveway. If you are still thinking on them when you park the car, leave the engine running, close your eyes and take a deep breath. If you believe in God, pray. If you don't then meditate for a second with the INTENTION of letting go of the day and becoming the 'real you'. You may have forgotten who the real you is so I'll remind you.... THE REAL YOU IS THAT GUY OR GAL THAT LAUGHS, LOVES, AND LIVES. Be the real you as often as possible. If you can't do it during your 9 - 5, do it at home. YOUR RELATIONSHIPS AND YOUR PERSONAL HAPPINESS DEPEND ON IT. I am going to make you a promise right now. If you do what I've just said, your days will be longer and your joy will be exponential. I can't explain why I know this to be true. But I do know it. And if you are there reading this right now, you know it is true too. We both know it is true because it COMES FROM WITHIN EVERY MAN, WOMAN and CHILD. We are born into love and happiness. Lets not fade away into the darkness so easily found in this world. Lets keep the flame burning. The flame of passion. Passion for LIFE. Peace be with you.

    Read the article

  • New spreadsheet accompanying SmartAssembly 6.0 provides statistics for prioritizing bug fixes

    - by Jason Crease
    One problem developers face is how to prioritize the many voices providing input into software bugs. If there is something wrong with a function that is the darling of a particular user, he or she tends to want action - now! The developer's dilemma is how to ascertain whether the problem is major or minor, and when it should be addressed. Now there is a new spreadsheet accompanying SmartAssembly that provides exactly that information in an objective manner. This might upset those used to getting their way by being the loudest or pushiest, but ultimately it will ensure that the biggest problems get the priority they deserve. Here's how it works: Feature Usage Reporting (FUR) in SmartAssembly 6.0 provides a wealth of data about how your software is used by its end-users, but in the SmartAssembly UI the data isn't mined to its full extent. The new Excel spreadsheet for FUR extracts statistics from that data and presents them in easy-to-understand forms. I developed the spreadsheet feature in Microsoft Excel, using a fair amount of VBA. The spreadsheet connects directly to the database which stores the feature-usage data, and shows a wide variety of statistics and tables extracted from that data. You want to know what percentage of users have used the 'Export as XML' button? No problem. How popular is v5.3 compared to v5.1? There are graphs for that. You need to know whether you have more users in Russia or Brazil? There's a big pie chart for that. I recently witnessed the spreadsheet in use here at Red Gate Software. My bug is exposed as minor: While testing new features in .NET Reflector, I found a usability bug in the Refresh button and filed it in the Red Gate bug-tracking system. The bug was labelled "V.NEXT MINOR," which means it would be fixed in the next point release. Although I'm a professional tester, I'm not much different from most software users when they discover a bug that affects them personally: I wanted it fixed immediately. There was an ulterior motive at play here, of course: I would get to see my colleagues put the spreadsheet to work. The Reflector team loaded up the spreadsheet to view the feature-usage statistics that SmartAssembly collected for the Refresh button. The resulting statistics showed that only 8% of users have ever pressed the Refresh button, and only 2.6% of sessions involve pressing the button. When Refresh is used, it's only pressed on average 1.6 times a session, with a maximum of 8 times during a session. This was in stark contrast to what I was doing as a conscientious tester: pressing it dozens of times per session. The spreadsheet provided evidence that my bug was a minor one. On to more serious things: Based on the solid evidence uncovered by the spreadsheet, the Reflector team concluded that my experience does not represent that of the vast majority of Reflector's recorded users. The Reflector team had ample data to send me back to my desk and keep the bug classified as "V.NEXT MINOR." The team then went back to fixing more serious bugs. If I put myself in the shoes of the user, I might not be thoroughly happy, but I cannot deny that the evidence clearly placed me in a very small minority. Next time I'm hoping the spreadsheet will prove that my bug is more important. Find out more about Feature-Usage Reporting here. The spreadsheet is available for free download here.

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you would when integrating BIP, they have had to make a choice about how to integrate.
1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route, you are going to need to manage templates, data extraction and processing. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: the content manager looks after the report artifacts itself and provides you the UI to manage and run the reports.
2. Web Service level - Further up in the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports and users (locally managed) on the server, and the catalog itself, via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server.
3. Embedded BIP server UI - The final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but, again, not easily, nor is it documented. I have messed with running the server pages inside an IFRAME; not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around.
So, a reasonable number of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.

    Read the article

  • Know Your Service Request Status

    - by Get Proactive Customer Adoption Team
    To monitor a Service Request or not to monitor a Service Request... That should never be the question. Monitoring the Service Requests you create is an essential part of the process to resolve your issue when you work with a Support Engineer. If you monitor your Service Request, you know at all times where it is in the process, or to be more specific, you know at all times what action the Support Engineer has taken on your request and what the next step is. When you think about it, it is rather simple... Oracle Support is working the issue, Oracle Development is working the issue, or you are. When you check on the status, you may find that the Support Engineer has a question for you or the engineer is waiting for more information to resolve the issue. If you monitor the Service Request, and respond quickly, the process keeps moving, and you'll get your answer more quickly. Monitoring a Service Request is easy. All you need to do is check the status codes that the Support Engineer or the system assigns to your Service Request. These status codes are not static. You will see that during the life of your Service Request, it will go through a variety of status codes. The best advice I can offer you when you monitor your Service Request is to watch the codes. If the status is not changing, or if you are not getting responses back within the agreed timeframes, you should review the action plan the Support Engineer has outlined or talk about a new action plan. Here are the most common status codes:
Work in Progress indicates that your Support Engineer is researching and working the issue.
Development Working means that you have a code-related issue and Oracle Support has submitted a bug to Development.
Please pay particular attention to the following statuses; they indicate that the Support Engineer is waiting for a response from you:
Customer Working usually means that your Support Engineer needs you to collect additional information, needs you to try something or to apply a patch, or has more questions for you.
Solution Offered indicates that the Support Engineer has identified the problem and has provided you with a solution.
Auto-Close or Close Initiated are statuses you don't want to see. Monitoring your Service Request helps prevent your issues from reaching these statuses. They usually indicate that the Support Engineer did not receive the requested information or action from you. This is important. If you fail to respond, the Support Engineer will attempt to contact you three times over a two-week period. If these attempts are unsuccessful, he or she will initiate the Auto-Close process. At the end of this additional two-week period, if you have not updated the Service Request, your Service Request is considered abandoned and the Support Engineer will assign a Customer Abandoned status. A Support Engineer doesn't like to see this status, since he or she has been working to solve your issue, but we know our customers dislike it even more, since it means their issue is not moving forward. You can avoid delays in resolving your issue by monitoring your Service Request and acting quickly when you see the status change. Respond to the request from the engineer to answer questions, collect information, or to try the offered solution. Then the Support Engineer can continue working the issue and the Service Request keeps moving forward towards resolution.
Keep in mind that if you take an extended period of time to respond to a request or to provide the information requested, the Support Engineer cannot take the next step. You may also inadvertently send an implicit message about the problem's urgency that does not match the Service Request priority or your need for an answer. Help us help you. We want to get you the answer as quickly as possible so you can stay focused on your company's objectives. Now, back to our initial question: to monitor Service Requests or not to monitor Service Requests? I think the answer is clear: yes, monitor your Service Request to resolve the issue as quickly as possible.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 7-13, 2012

    - by Bob Rhubart
    The Top 10 items shared via the OTN ArchBeat Facebook page for the week of October 7-13, 2012. OOW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting. Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca. Adaptive ADF/WebCenter template for the iPad | Maiko Rocha Oracle Fusion Middleware A-Team member Maiko Rocha responds to a customer request for information about how to create an adaptive iPad template for their WebCenter Portal application, "a specific template to streamline their workflow on the iPad." Following the Thread in OSB | Antony Reynolds Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post. WebCenter Sites Gadget Development Concepts Quickstart | John Brunswick What are Gadgets? "At their most basic level they can be thought of as lightweight portlets that run largely on the client side of an architecture," says John Brunswick. "Gadgets provide a cross-platform container to run reusable UI modules that generally expose dynamic information to an end user, allowing for some level of end user customization." Oracle Fusion Middleware Security: OAM and OIM 11g Academies Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call Academies. "These indexes contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations." Fusion Applications Technical Tips | Naveen Nahata "Setting memory parameters for Admin and Managed servers of various domains in Fusion Applications can be, let us say, a little daunting," says Oracle Fusion Middleware A-Team member Naveen Nahata. "While all this may look complicated and intimidating, it is actually relatively simple once you understand how it all works." Updated Agenda for OTN Architect Day Los Angeles (Oct 25) In less than two weeks Oracle Architect Day rolls into Los Angeles, with a full slate of sessions devoted to cloud computing, engineered systems, and SOA. Follow the link for the updated event agenda. ORCLville: OOW 2012 - A Not So Brief Recap Oracle ACE Director Floyd Teter, an Applications & Apps Technology specialist, shares his personal, frank, and extensive recap of Oracle OpenWorld 2012. SOA Suite create partition in Enterprise Manager | Peter Paul van de Beek "In Oracle SOA Suite 10g, or more specific BPEL 10g, one could group functionality in domains," says Peter Paul van de Beek. "This feature has been away in the early versions of SOA Suite 11g. They have returned in more recent version and can be used for all SCA composites (instead of BPEL only). Nowadays these 10g domains are called partitions." 
Thought for the Day "I strive for an architecture from which nothing can be taken away." — Helmut Jahn Source: BrainyQuote.com

    Read the article

  • SQLAuthority News – Live Virtual Classroom New Trend in Technology

    - by nupurdave
    This blog post is by Nupur Dave, who is a housewife and works from home. Changing times and a super busy lifestyle have rendered most of us powerless when it comes to doing what we love to do. I feel that a man never ceases to learn and his sole aim is to seek knowledge, and keep growing. However, our tight schedules and packed calendars mean that we really have to struggle to take some time out and follow the path towards learning. Like all working professionals with a family to take care of, I hardly found time to pursue my interests. However, it was getting increasingly important for me to upgrade my skills, not only for my personal quest for knowledge but also to substantiate my professional standing. When I came to know about Koenig Live Virtual Classroom from friends, it piqued my interest. I felt like it was the answer to all my concerns. Without wasting a single minute, I contacted Koenig for a demo class. Here are some of the highlights of Koenig LVC which instantly struck a chord in me:
Online Training – Koenig offers 1-on-1 Online Training with the instructor at the other end. It doesn't matter where I am sitting, in my office or at home, I can connect to my trainer from anywhere.
Flexible Timings – The most comfortable part is you get to choose the time that suits you best.
Economical – No need to travel a thousand miles, the experts are right here on your computer screen. So no extra cost of travel, lodging and meals.
24X7 Lab Access – This is again a great feature that proved to be very beneficial in gaining a practical understanding of the subject. Powered by a data center, this facility offers students much to look forward to.
300+ Full Time Certified Experts – Be assured that you are learning from the best people in the industry.
Customized Courses – Course material and training delivery are completely customized to suit your specific requirements.
Official Courseware – The instructor teaches from the official courseware of the vendor, depending on which course you have applied for – be it Microsoft, Cisco, Oracle or any other certification.
Take Exam from Anywhere – Post completion of your IT training, you can take your certification exam from anywhere. Again, no need to travel a thousand miles to earn certified status.
No Pre-Recorded Sessions – For those who still need clarification, it will be a live online classroom with trainers instructing you in real time. So you won't get any surprise of pre-recorded sessions in place of your live instructor.
Koenig's Live Virtual Classroom methodology greatly exceeded my expectations. The instructor was highly skilled and very professional. I had concerns about the quality of AV on the computer screen, and whether I'd be able to understand each topic in detail. However, the quality of video and sound, and the learning methodology used, were impeccable. If you're also facing a time crunch and other commitment issues which are getting in the way of your professional development, LVC is the best solution to learn and grow. To know more about Student Experiences and Feedback of Koenig LVC, you can view their Testimonials. Reference: Nupur Dave (http://blog.sqlauthority.com) Filed under: SQL Authority

    Read the article

  • Accounts in Work Items after migration to TFS 2010 and to new domain

    - by Clara Oscura
    Lately I've been doing some tests on migrating our TFS 2008 installation to TFS 2010, coupled with a machine and domain change. One particular topic that was tricky is user accounts. We first installed a new machine with TFS 2010 and then migrated the projects from the old server. The work items were migrated with the projects. Great, but if I try to edit one of the old work items I cannot save it anymore, because some fields contain old user names (ex. OLDDOMAIN\user) which are not known in the new domain (it should be NEWDOMAIN\user). The errors look like this: When I correct the 'Assigned To' field value, I get another error regarding another field: Before TFS 2010, we had the TFSUsers power tool. It allowed you to map an old user name to a new user name. This is not available anymore because WI fields with user accounts are now synchronized with AD display name changes (explained here). The correct way to go about this in TFS 2010 is to use TFSConfig Identities before adding the new domain accounts into the TFS groups (documented here). So, too late for us. I've found a (tedious) workaround to change those old accounts in work items in order to allow people to keep working with them.
1. Install TFS 2010 power tools.
2. Export the WIT from your project (VS | Tools | Process Editor | Work Item Types). Save the definition, for example: Original_MyProject_Task.xml
3. Copy the xml to a new file (NoReadOnly_MyProject_Task.xml) and edit it. From the field definitions of 'Activated By', 'Closed By' and 'Resolved By', remove the following:
        <WHENNOTCHANGED field="System.State">
          <READONLY />
        </WHENNOTCHANGED>
4. Import the WIT in VS. Choose the new file (NoReadOnly_MyProject_Task.xml) and import it in MyProject.
5. Open all tasks in Excel (flat list). Display the following columns: Assigned To, Activated By, Closed By, Resolved By. Change the user accounts to the new ones (I usually sort each column alphabetically to make it easier).
6. Publish. If you get a conflict on a field, tough luck. You will have to manually choose "Local version" for each work item. I told you it was a tedious process.
7. Import the original WIT (Original_MyProject_Task.xml) in MyProject. We only changed the WI definition so that we could change some fields. The original definition should be put back.
And what about these other fields? Created By and Authorized As are not editable by definition (VS | Tools | Process Editor | Work Item Fields Explorer), even if they are not marked as read-only in the WIT. You can leave the old values. It doesn't seem to matter to TFS. The other four fields are editable by definition, so only the WIT readonly rule prevents us from changing them. Technorati Tags: TFS, Team Foundation Server 2010, Work Item, Domain change

    Read the article

  • "previousMode": Controling the Pin Action of a TopComponent

    - by Geertjan
    An excellent thing I learned today is that you, as a developer of a NetBeans module or NetBeans Platform application, can control the pin button. Up until today, whenever I had a TopComponent defined to appear in "rightSlidingSide" mode and then I clicked the "pin" button, as shown here... ...the TopComponent would then find itself pinned in the "explorer" mode. It would make more sense if it were pinned in the "properties" mode, which is the docked mode closest to the "rightSlidingSide" mode. Not being able to control the "pin" button has been a recurring question (including in my own head) over several years. But the NetBeans Team's window system guru Stan Aubrecht informed me today that a "previousMode" attribute exists in the "tc-ref" file of the TopComponent. For the last few releases, that file has been generated via the annotations in the TopComponent. However, "previousMode" is currently not one of the attributes exposed by the @TopComponent.Registration annotation. Therefore, what I did was this:
1. Set "rightSlidingSide" in the "mode" attribute of the @TopComponent.Registration.
2. Build the module.
3. Find the "generated-layer.xml" (in the Files window) and move the layer registration of the TopComponent, including its action and menu item for opening the TopComponent, into my own manual layer within the module.
4. Remove all the TopComponent annotations from the TopComponent, though you can keep @ConvertAsProperties and @Messages.
5. Add the "previousMode" attribute, as highlighted below, into my own layer file, i.e., within the tags copied from the "generated-layer.xml":
    <folder name="Modes">
        <folder name="rightSlidingSide">
            <file name="ComparatorTopComponent.wstcref">
                <![CDATA[<?xml version="1.0" encoding="UTF-8"?>
                <!DOCTYPE tc-ref PUBLIC "-//NetBeans//DTD Top Component in Mode Properties 2.0//EN" "http://www.netbeans.org/dtds/tc-ref2_0.dtd">
                <tc-ref version="2.0">
                    <tc-id id="ComparatorTopComponent"/>
                    <state opened="false"/>
                    <previousMode name="properties" index="0" />
                </tc-ref>
                ]]>
            </file>
        </folder>
    </folder>
Now when you run the application and pin the specific TopComponent defined above, i.e., in the above case, named "ComparatorTopComponent", you will find it is pinned into the "properties" mode! That's pretty cool, and if you agree, then you're a pretty cool NetBeans Platform developer, and I'd love to find out more about the application/s you're creating on the NetBeans Platform! Meanwhile, I'm going to create an issue for exposing the "previousMode" attribute in the @TopComponent.Registration annotation.

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story of a friend who thought they had an intrusion or virus issue, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL.
Building Sample Data
USE [TestDB]
GO
-- Creating Table Products
CREATE TABLE [dbo].[Products](
[ProductID] [int] NOT NULL,
[ProductDesc] [varchar](50) NOT NULL,
CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
) ON [PRIMARY]
GO
-- Creating Table ProductDetails
CREATE TABLE [dbo].[ProductDetails](
[ProductDetailID] [int] NOT NULL,
[ProductID] [int] NOT NULL,
[Total] [int] NOT NULL,
CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ([ProductDetailID] ASC)
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ProductDetails] WITH CHECK
ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID])
REFERENCES [dbo].[Products] ([ProductID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
-- Insert Data into Table
USE TestDB
GO
INSERT INTO Products (ProductID, ProductDesc)
SELECT 1, 'Bike'
UNION ALL
SELECT 2, 'Car'
UNION ALL
SELECT 3, 'Books'
GO
INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
SELECT 1, 1, 200
UNION ALL
SELECT 2, 1, 100
UNION ALL
SELECT 3, 1, 111
UNION ALL
SELECT 4, 2, 200
UNION ALL
SELECT 5, 3, 100
UNION ALL
SELECT 6, 3, 100
UNION ALL
SELECT 7, 3, 200
GO
Select Data from Tables
-- Selecting Data
SELECT * FROM Products
SELECT * FROM ProductDetails
GO
Delete Data from Products Table
-- Deleting Data
DELETE FROM Products
WHERE ProductID = 1
GO
Select Data from Tables Again
-- Selecting Data
SELECT * FROM Products
SELECT * FROM ProductDetails
GO
Clean up Data
-- Clean up
DROP TABLE ProductDetails
DROP TABLE Products
GO
My friend was confused because no DELETE was being fired against the ProductDetails table, yet rows were disappearing from it. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified, deleting data from Table A also deletes any rows in another table that reference it through the foreign key.
Workaround 1: Design Changes – 3 Tables. Change the design to have more than two tables. Create one Product Master table with all the products. It should historically store the complete product list; no product should ever be removed from it. Add another table called Current Product which contains only the products that should be visible in the product catalogue. A third table should be called the ProductHistory table. There should be no use of the CASCADE keyword among them.
Workaround 2: Design Changes – Column IsVisible. You can keep the same two tables, 1) Products and 2) ProductDetails. Add a column of BIT datatype and name it IsVisible. Now change your application code to display the catalogue based on this column. There should be no need to delete anything.
Workaround 3: Bad Advice (bad advice begins here). The reason I call this bad advice is because it is going to be bad advice for sure. You should make the necessary design changes and not use poor workarounds which can damage the system and database integrity further. Here are the examples: 1) Do not delete the data – well, this is not a real solution, but it can buy time to implement design changes. 2) Do not have ON DELETE CASCADE – in this case, you will have entries in ProductDetails with no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate data – you can move all the data of the Products table into the ProductDetails table and repeat it on each row, then remove the CASCADE code. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (Bad advice ends here) Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a Guest Blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.
When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization's every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:
1) No End-to-End Visibility into the Enterprise-Wide Application Deployment. Each vendor solution which is added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.
2) No Visibility into Performance Bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including the ability to identify cross-tier issues such as database problems, hung requests, slowness, memory leaks, and hardware errors/failures causing DB/MW issues.
As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single pane of glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and partner provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners, and you will find 3 of our solutions there, including our flagship plugin for VMware. Let's look at Blue Medora's VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:
Symptoms
• My database is bogging down, however the database appears okay internally. Maybe it's starved for resources?
• My OS tooling is showing everything is "OK". Something doesn't add up.
Root cause
• Through the VMware plugin we can see the problem is actually on the virtualization layer.
Solution
• From within Enterprise Manager -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the root cause.
Here is the console view: Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered.
With Metric Extensions you have the "Next Generation" of User-Defined Metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]

    Read the article

< Previous Page | 539 540 541 542 543 544 545 546 547 548 549 550  | Next Page >