Search Results

Search found 16971 results on 679 pages for 'blogs'.

Page 457/679 | < Previous Page | 453 454 455 456 457 458 459 460 461 462 463 464  | Next Page >

  • The remote server returned an error: 227 Entering Passive Mode

    - by hmloo
    Today, while uploading a file to an FTP server, my code threw the error "The remote server returned an error: 227 Entering Passive Mode". After some research I learned a bit about how FTP works. FTP can run in active or passive mode, which determines how the data connection is established.

    Active mode:  command connection: client (>1023) -> server port 21;  data connection: server port 20 -> client (>1023)
    Passive mode: command connection: client (>1023) -> server port 21;  data connection: client (>1023) -> server (>1023)

    In active mode, the client connects from a random unprivileged port (N > 1023) to the FTP server's command port (default 21). When the client needs to transfer data, it uses the PORT command to tell the server: "Hi, I opened port XXXX, please connect to me." The server then uses port 20 to initiate the data connection to that client port. In passive mode, the client also connects from a random unprivileged port (N > 1023) to the server's command port (default 21), but when data needs to be transferred, the server tells the client: "Hi, I opened port XXXX, please connect to me." and the client initiates the data connection to that server port. In a nutshell, in active mode the server connects to the client, and in passive mode the client connects to the server. So if your FTP server is configured to work in active mode only, or the firewalls between your client and the server block the passive data port range, you will get this error message. To fix the issue, set the System.Net.FtpWebRequest property UsePassive = false. Hope this helps! Thanks for reading!
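
    For reference, here is a minimal C# sketch of the fix described above. This is not the author's original code; the server URL, credentials and file name are placeholders for illustration only.

    // Hypothetical example: forcing active mode on an FTP upload to avoid the
    // "227 Entering Passive Mode" failure when passive-mode data ports are blocked.
    using System;
    using System.IO;
    using System.Net;

    class FtpUploadExample
    {
        static void Main()
        {
            var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/upload/test.txt");
            request.Method = WebRequestMethods.Ftp.UploadFile;
            request.Credentials = new NetworkCredential("user", "password");  // placeholder credentials
            request.UsePassive = false;   // the fix suggested in the post: use active mode

            byte[] payload = File.ReadAllBytes("test.txt");
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(payload, 0, payload.Length);
            }

            using (var response = (FtpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Upload status: {0}", response.StatusDescription);
            }
        }
    }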

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM.

    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper)

    Life science customers now have the ability to take advantage of all of the benefits of Oracle’s WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999 and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution.

    With the Part 11 certification, Oracle’s WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting.

    Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet ever changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers. Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.

    USDM provides:
    • Expertise in Life Science ECM Business Processes
    • Prebuilt Life Science Configuration in WebCenter
    • Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle’s expanding commitment to Life Sciences. For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • SQL Server MCM Changes and Readiness Videos

    - by Enrique Lima
    Towards the end of 2010, Microsoft made some changes to the Microsoft Certified Master for SQL Server 2008 program. Previously, certification required attending a three-week bootcamp/course in Redmond. That has now changed: the process has been mapped to two exams. Get information from Microsoft Learning about the changes, process, resources and pricing for the certification exams: http://www.microsoft.com/learning/en/us/certification/master-sql-path.aspx In addition, some SQL MCM rotation instructors and SQL MCMs have created materials to prepare for those exams. I see this as a huge benefit for individuals who are planning to take on the MCM, but really it is of huge benefit to anyone who works with SQL Server on a regular basis. The Readiness Videos are a great starting point: http://technet.microsoft.com/en-us/sqlserver/ff977043.aspx

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Kathryn Perry
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools. Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP." The post goes on to quote a Forrester analyst: ""There's room for any process-driven application to run more efficiently, especially if they're socially enabled," said Rob Koplowitz, VP and principal analyst at Forrester Research. "It takes the human part of the process not generally captured today to provide better access to content, information and collective actions." Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform." With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here (http://www.mediapost.com/publications/article/187096/oracle-integrates-social-marketing-into-enterprise.html).

    Read the article

  • Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)

    - by arungupta
    Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted. This can restrict the scalability of your applications. In a typical application, ServletInputStream is read in a while loop:

    public class TestServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            ServletInputStream input = request.getInputStream();
            byte[] b = new byte[1024];
            int len = -1;
            while ((len = input.read(b)) != -1) {
                . . .
            }
        }
    }

    If the incoming data is blocking or streamed slower than the server can read it, then the server thread is stuck waiting for that data. The same can happen if the data is written to ServletOutputStream. This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners - the ReadListener and WriteListener interfaces. These are registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when content is available to be read or can be written without blocking. The updated doGet in our case will look like:

    AsyncContext context = request.startAsync();
    ServletInputStream input = request.getInputStream();
    input.setReadListener(new MyReadListener(input, context));

    Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of traditional I/O. At most one ReadListener can be registered on ServletInputStream and, similarly, at most one WriteListener can be registered on ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking read. ServletOutputStream.canWrite is a new method to check whether data can be written without blocking. The MyReadListener implementation looks like:

    @Override
    public void onDataAvailable() {
        try {
            StringBuilder sb = new StringBuilder();
            int len = -1;
            byte b[] = new byte[1024];
            while (input.isReady() && (len = input.read(b)) != -1) {
                String data = new String(b, 0, len);
                System.out.println("--> " + data);
            }
        } catch (IOException ex) {
            Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void onAllDataRead() {
        System.out.println("onAllDataRead");
        context.complete();
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
        context.complete();
    }

    This implementation has three callbacks: onDataAvailable is called whenever data can be read without blocking; onAllDataRead is invoked when the data for the current request has been completely read; onError is invoked if there is an error processing the request. Notice that context.complete() is called in onAllDataRead and onError to signal the completion of the data read. For now, the first chunk of available data needs to be read in the doGet or service method of the servlet; the rest of the data can then be read in a non-blocking way using the ReadListener. This is going to get cleaned up so that all data reads can happen in the ReadListener only. The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 and onwards. The slides and a complete re-run of the "What's new in Servlet 3.1: An Overview" session at JavaOne are available here. Here are some more references for you:
    Java EE 7 Specification Status
    Servlet Specification Project
    JSR Expert Group Discussion Archive
    Servlet 3.1 Javadocs

    Read the article

  • invite: Oracle Fusion Applications Partner Update Webcast

    - by mseika
    Oracle Fusion Applications: Thursday's Partner Updates

    In order to keep you up to date with partner-specific news and information regarding Oracle Fusion Applications, we are expanding our Fusion Applications Webcast Series to include these additional Thursday sessions. All sessions will be recorded and replays will be posted to this Oracle PartnerNetwork page. Please mark your calendar for these NEW Fusion Partner Update specific sessions (click here for logistics and dial-in details for each webcast):

    11/29/12 - Win Cloud SFA with Fusion CRM: Sales Positioning
    12/6/12 - Win Cloud SFA with Fusion CRM: Fusion CRM against SFDC
    12/13/12 - Implementing Fusion Applications: ERP Cloud Services, Back Office Solutions that Keep You in Front
    12/20/12 - Understanding Fusion Supply Chain Management (SCM) Opportunities

    PLEASE NOTE: This webcast series is for Oracle Partners and Oracle Employees ONLY.

    Read the article

  • Perspective Is Everything

    - by juanlarios
    Sitting in a window seat on my way back from Seattle, I looked out the window and saw the large body of water. I was reminded of childhood memories of running as hard as I could through burning hot sand with the anticipation of the splash of the ocean. Looking out the window, the water appeared like a sheet draped over the land. I couldn't help but ponder how perspective changes everything. Over the last several days I had a chance to attend the MVP Summit in Redmond. I had a great time with fellow MVPs and the SharePoint Product Group. Although I can't say much about what was discussed and what is coming in the future, I want to share some realizations I had while experiencing the MVP Summit. The SharePoint product is ever-improving, full of innovation but also a reactionary embodiment of MVP, client and market feedback. There are several features that come to mind that clients complain about, where I have felt helpless in informing them that the features are not as mature as they would like. Together, we figure out a way to make it work and deal with the limitations. It became clear that there are features that have taken on a different purpose in the marketplace from the original vision. The SharePoint product group is working hard to react to these changes in vision and make SharePoint better for real-life implementations. It is easy to think that SharePoint should be all things to all people. In reality there are products that are very detailed in specific areas: they do that one thing well but severely lack in others. It's easy sometimes to say, "What was Microsoft thinking with this feature?" The product group is doing all they can to make the moving pieces better while dealing with the challenges of having all of them work together. Sometimes the features don't fully embody the vision because of the many challenges, but trust me when I say the product group is really focused on delivery and innovation. As I was speaking with a fellow MVP throughout the session, we spoke about the iPad 2 (ironically announced this past week during the MVP Summit) and Microsoft's possible product answer; I realized the days of reactionary products from Microsoft are over. There are many users who will remember Vista and the painful execution of that product, but there has been a lot of success in Windows 7. There was no rush for a reactionary answer to the Nintendo Wii; as a result, a groundbreaking and game-changing product was brought to market: the Xbox Kinect! I can't say much here, but it's safe to say: expect innovation, and execution of products and technology that will change the market instead of reacting to it! There are many things I learned that have to do with perspective, technology, etc. that I would love to share, but this is as far as I can go in the details. This might not be new to you or specifically the message that was shared during the summit. These are just my impressions of the event and the spirit of future vision. Great things ahead!

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count, since the cost per write is much higher:

    ...
    #define SIZE 1024

    void test_write()
    {
      starttime();
      int file = open("./test.dat", O_WRONLY|O_CREAT|O_SYNC, S_IWGRP|S_IWOTH|S_IWUSR);
      ...

    Running this gave the following results:

    Time per iteration 0.000065606310 MB/s
    Time per iteration 2.709711563906 MB/s
    Time per iteration 0.178590114758 MB/s

    Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison, since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second, which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine the profiles for the two cases. When the write() was trapping into the OS, the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication of how to interpret profiles from apps doing I/O: it's the sleep time that indicates disk activity.

    Read the article

  • Register Now!

    - by Claudia Caramelli-Oracle
    Did you know that Italian regulations prevent companies from sending you e-mail communications without your explicit consent? By subscribing to Oracle communications, you can only benefit! Here are a couple of examples:

    Keep your Oracle knowledge at its best:
    • Stay up to date on Oracle technologies with the latest information and announcements about our products and services
    • Keep current with regular industry best practices and analyst reports
    • Hear directly from our management
    • Receive invitations to local events, where you can meet Oracle specialists and expand your network with other customers

    Control the types of information you receive:
    • Manage the types of content you want to receive by subscribing to topics based on the role, industry or product that interests you
    • Or you can always choose to unsubscribe at any time with our "one-click unsubscribe"

    Register now for your Oracle account here: https://profile.oracle.com/

    Read the article

  • SQL Azure Security: DoS Part II

    - by Herve Roggero
    Ah! When you shoot yourself in the foot... a few times... it hurts! That's what I did on Sunday, to learn more about the behavior of the SQL Azure Denial of Service prevention feature. This article is a short follow-up to my last post on this feature. In this post, I will outline some of the lessons learned from testing the behavior of SQL Azure from two machines; from the standpoint of SQL Azure they look like one machine, since they are behind a NAT.

    All logins affected - The first thing to note is that all logins are affected. If you lock yourself out of a specific database, none of the logins will work on that database. In fact, the database size becomes "--" in the SQL Azure Portal.

    Less than 100 sessions - I was able to see 50+ sessions in SQL Azure (by looking at sys.dm_exec_sessions) before being locked out. The DoS feature appears to be triggered in part by the number of open sessions. I could not determine whether the lockout is also triggered by the speed at which connection requests are made, however.

    Other databases unaffected - This was interesting... the DoS feature works at the database level. Other databases were available for me to use.

    Just wait - Initially I thought that going through SQL Azure and connecting from there would reset the database and allow me to connect again. Unfortunately this doesn't seem to be the case. You will have to wait. And the more you lock yourself out, the more you will have to wait... The first time, the database became available again within 30 seconds or so; the second time within 2-3 minutes; and the third time... within 2-3 hours...

    Successful logins - The DoS feature appears to engage only for valid logins. If you have a login failure, it doesn't seem to count. I ran a test with over 100 login failures without being locked out.
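
    As an illustration of the kind of test described above, here is a minimal C# sketch that opens and holds many connections against a single database until the DoS protection engages. This is a hypothetical reconstruction, not the author's code; the connection string values are placeholders, and something like this should only ever be run against a disposable test database.

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    class SessionFloodTest
    {
        static void Main()
        {
            // Placeholder server, database and credentials. Pooling is disabled so
            // that every Open() really creates a new session on the server.
            const string connectionString =
                "Server=tcp:myserver.database.windows.net;Database=TestDb;" +
                "User ID=testuser;Password=placeholder;Encrypt=True;Pooling=false;";

            var openConnections = new List<SqlConnection>();
            try
            {
                // Keep opening (and holding) connections; each one shows up in
                // sys.dm_exec_sessions as a separate session.
                for (int i = 1; i <= 200; i++)
                {
                    var conn = new SqlConnection(connectionString);
                    conn.Open();
                    openConnections.Add(conn);
                    Console.WriteLine("Session {0} opened", i);
                }
            }
            catch (SqlException ex)
            {
                // Around the threshold described in the post, new connections start failing.
                Console.WriteLine("Locked out after {0} sessions: {1}", openConnections.Count, ex.Message);
            }
            finally
            {
                foreach (var conn in openConnections)
                {
                    conn.Dispose();
                }
            }
        }
    }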

    Read the article

  • Stuxnet - how it infects

    - by Kit Ong
    Excerpt from the CNET article: http://news.cnet.com/8301-13772_3-57413329-52/stuxnet-delivered-to-iranian-nuclear-plant-on-thumb-drive/?part=propeller&subj=news&tag=linkv

    The Stuxnet worm propagates by exploiting a hole in all versions of Windows in the code that processes shortcut files, ending in ".lnk," according to...[the] Microsoft Malware Protection Center....Merely browsing to the removable media drive using an application that displays shortcut icons, such as Windows Explorer, will run the malware without the user clicking on the icons. The worm infects USB drives or other removable storage devices that are subsequently connected to the infected machine. Those USB drives then infect other machines much like the common cold is spread by infected people sneezing into their hands and then touching door knobs that others are handling. The malware includes a rootkit, which is software designed to hide the fact that a computer has been compromised, and other software that sneaks onto computers by using digital certificates signed by two Taiwanese chip manufacturers that are based in the same industrial complex in Taiwan--RealTek and JMicron, according to Chester Wisniewski, senior security advisor at Sophos.... It is unclear how the digital signatures were acquired by the attacker, but experts believe they were stolen and that the companies were not involved. Once the machine is infected, a Trojan looks to see if the computer it lands on is running Siemens' Simatic WinCC software. The malware then automatically uses a default password that is hard-coded into the software to access the control system's Microsoft SQL database.

    Read the article

  • NoSQL

    - by NoReasoning
    Last night (Tuesday, June 28), at the KC .NET User Group meeting, George Westwater gave a terrific presentation on NoSQL. The best way to define it (well, the best way is to see George explain it; he says he will record his presentation and make it available through his blog - link above) is as databases that do not use relational technology. And his point - and this is true, I have been around awhile - is that non-relational databases have been used for over 50 years in business. He points out that Wall Street firms have been using non-relational technology ever since they started using computers. IBM still fully supports IMS, now in version 11 (12 is in beta), because these firms are still using this product and will continue to do so for a long time. Of course, like a lot of computer business technology, there are a lot of new NoSQL products available these days, created simply as a reaction to the problems of scaling relational databases for internet use. As a result, it almost looks as though NoSQL is something new. And there are a lot, I mean a LOT, I mean a L-O-T, of new products out there for this technology. The best resource covering all of these products is http://nosql-database.org/, which has a huge listing of what is available. My interest in the subject is primarily due to my interest in Windows Azure and the fact that Windows Azure storage is all non-relational, even the table storage. It is very fascinating and, most of all, far cheaper than using SQL Azure for storage in the "cloud."

    Read the article

  • ADF Faces Skin Editor - How to Work with It

    - by Shay Shmeltzer
    The ODTUG Kscope11 conference was a great success, with lots of sessions about FMW running in a special track. I did several sessions and labs at the conference, and I thought it might be a good idea to at least give you a taste of what you might have missed. So here is most of what I demoed in my ADF Faces Skinning session (not all, though - that session was 60 minutes long, and while everyone did end up going out of the building in the middle because of a fire drill for about 5 minutes, there were other things covered in the session as well). In the demo here you'll see how to generate new images and a default color scheme, how to identify a component class with Firebug, how to skin a component, how to identify the global selector of a property, how to change fonts and how to change strings. By the way, for more on ADF skinning you should also listen to the ADF Insider seminar that Frank Nimphius recorded on skinning; it will give you a better understanding of the overall skinning process. P.S. In the demo I add an entry to the web.xml file which prevents ADF Faces from compressing the HTML that is generated. The entry is for org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION and I set it to true. This is very useful when you work on creating the skin, but don't forget to un-set it before you go to production.

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows for DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including: trace, log, checkpoint before, suspend, abort, and several others. With 11gR1, events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:

    • Automatic switchover to the secondary system during planned outages
    • Better monitoring over source systems' performance and automated switchover to the standby system in case of an outage with the primary system
    • Automatic switchover from initial load to changed data movement
    • Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    • Automatic stoppage of the Delivery module to allow end-of-day reporting
    • Finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers

    If you would like to see a demo, please visit our youtube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions to the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investments in documenting retail processes and architectures. There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster. The sale is only part of our success metric -- it's actually more important that the customer realizes the benefits of the software. That's when we actually celebrate. This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference. That provides the perfect forum to announce the release of the Oracle Retail Reference Library. The RRL is available for free to Oracle Retail customers and partners. It contains 1000s of hours of work and represents years of experience in the retail industry. The Retail Reference Library is composed of three offerings:

    Retail Reference Model - We've been sharing the RRM for several years now, with lots of accolades. The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams. The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well. Using these processes, the business and IT are better able to communicate the expectations of the software. They can be used to guide customization when necessary, and help identify areas for optimization in the organization.

    Retail Reference Architecture - When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper. So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration. These documents and diagrams describe how all the systems typically found in a retailer's enterprise work together. They serve as a way to jump-start implementations using best practices we've captured over the years.

    Retail Semantic Glossary - Have you ever seen two people argue over something because they're using misaligned terminology? It's a huge waste and happens all the time. The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database. This initial version comes with limited content, with the goal of adding more over subsequent releases. This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks in a consistent language.

    These three offerings are downloaded from MyOracleSupport separately and linked together using the start page above. Everything is navigated using a Web browser. See the Oracle Retail Documentation blog for more details.

    Read the article

  • When to use each user research method

    - by user12277104
    There are a lot of user research methods out there, but sometimes we get stuck in a rut, conducting all formative usability testing before coding, or running surveys to gather satisfaction data. I'll be the first to admit that it happens to me, but to get out of a rut, it just takes a minute to look at where I am in the design & development cycle, what kind(s) of data I need, and what methods are available to me. We need reminders, or refreshers, every once in a while. One tool I've found useful is a graphic organizer that I created many years ago. It's been through several revisions, as I've adapted it to the product cycles of the places I've worked, changed my mind about how to categorize it, and added methods that I've used or created over time. I shared a version of this table at the 2012 International UPA conference, and I was contacted by someone yesterday who wanted to use it in a university course on user-centered design. I was flattered at the thought, but embarrassed, because I was sure it needed updating -- that was a year ago, after all. But I opened it today, and really, there's not much I'd change -- sure, I could add some nuance regarding the types of formative testing, such as modality (remote, unmoderated remote, or in-person) or flavor of testing (RITE, RITE-Krug, comparative, performance), but I think it's pretty much ok as is. Click on the image below to get the full-size PDF. And whether it's entirely "right" or "wrong" isn't the whole value of looking at these methods across the product lifecycle. The real value lies in the reminder that I have options. And those options change as the field changes, so while I don't expect this graphic to have an eternal shelf life, it's still ok a year after I last updated it. That said, if you find something missing or out of place, let me know :) 

    Read the article

  • Oracle Enterprise Manager 11g Launch at 1pm in New York

    - by john.brust
    If you're not in New York for the launch of Oracle Enterprise Manager 11g, you're still invited to join us for our live launch webcast starting shortly. Register now! Speakers include: Charles Phillips | President, Oracle Richard Sarwal | Senior Vice President, Product Development Perry M. Cozzone | Vice President and CIO, Colorcon, Inc J.P. Garbani | Vice President, Forrester Research Photo courtesy of our Oracle Database Insider team member: Jeff Erickson

    Read the article

  • Lambdas for .NET made easy…

    - by mbcrump
    The purpose of my blog is to explain things for a beginner to intermediate C# programmer. I've seen several blog posts that use lambda expressions, always assuming the audience is familiar with them. The purpose of this post is to make them simple and easily understood. Let's begin with a definition: a lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types. So anonymous function… delegates or expression tree types? I don't get it??? Confused yet? Let's break this into a few definitions and jump right into the code.

    anonymous function - an "inline" statement or expression that can be used wherever a delegate type is expected.
    delegate - a type that references a method. Once a delegate is assigned a method, it behaves exactly like that method. The delegate method can be used like any other method, with parameters and a return value.
    expression trees - represent code in a tree-like data structure, where each node is an expression, for example, a method call or a binary operation such as x < y.

    Don't worry if this still sounds confusing, let's jump right into the code with a simple 3 line program. We are going to use a Func delegate (all you need to remember is that this delegate returns a value). Lambda expressions are used most commonly with the Func and Action delegates, so you will see an example of both of these.

    Lambda expression, 3 lines:

    using System; using System.Collections.Generic; using System.Linq; using System.Text;

    namespace ConsoleApplication7
    {
        class Program
        {
            static void Main(string[] args)
            {
                Func<int, int> myfunc = x => x * x;
                Console.WriteLine(myfunc(6).ToString());
                Console.ReadLine();
            }
        }
    }

    is equivalent to the old way of doing it:

    using System; using System.Collections.Generic; using System.Linq; using System.Text;

    namespace ConsoleApplication7
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine(myFunc(6).ToString());
                Console.ReadLine();
            }

            static int myFunc(int x)
            {
                return x * x;
            }
        }
    }

    In the example, there is a single parameter, x, and the expression is x * x. I'm going to stop here to make sure you are still with me. A lambda expression is an unnamed method written in place of a delegate instance. In other words, the compiler converts the lambda expression to either:
    • a delegate instance
    • an expression tree

    All lambdas have the following form: (parameters) => expression or statement block. Now look back at the ones we have created. It should start to sink in. Don't get stuck on the => form; use it as an identifier of a lambda. A lambda expression can also be written in the following form:

    using System; using System.Collections.Generic; using System.Linq; using System.Text;

    namespace ConsoleApplication7
    {
        class Program
        {
            static void Main(string[] args)
            {
                Func<int, int> myFunc = x =>
                {
                    return x * x;
                };

                Console.WriteLine(myFunc(6).ToString());
                Console.ReadLine();
            }
        }
    }

    This form may be easier to read but consumes more space. Let's try an Action delegate - this delegate does not return a value.

    Action delegate example:

    using System; using System.Collections.Generic; using System.Linq; using System.Text;

    namespace ConsoleApplication7
    {
        class Program
        {
            static void Main(string[] args)
            {
                Action<string> myAction = (string x) => { Console.WriteLine(x); };
                myAction("michael has made this so easy");

                Console.ReadLine();
            }
        }
    }

    Lambdas can also capture outer variables (as in the example below). A lambda expression can reference the local variables and parameters of the method in which it's defined. Outer variables referenced by a lambda expression are called captured variables.

    Capturing outer variables:

    using System; using System.Collections.Generic; using System.Linq; using System.Text;

    namespace ConsoleApplication7
    {
        class Program
        {
            static void Main(string[] args)
            {
                string mike = "Michael";
                Action<string> myAction = (string x) =>
                {
                    Console.WriteLine("{0}{1}", mike, x);
                };
                myAction(" has made this so easy");

                Console.ReadLine();
            }
        }
    }

    Lambdas can also be used with a strongly typed list to loop through a collection.

    Used with a strongly typed list:

    using System; using System.Collections.Generic; using System.Linq; using System.Text;

    namespace ConsoleApplication7
    {
        class Program
        {
            static void Main(string[] args)
            {
                List<string> list = new List<string>() { "1", "2", "3", "4" };
                list.ForEach(s => Console.WriteLine(s));
                Console.ReadLine();
            }
        }
    }

    Outputs:
    1
    2
    3
    4

    I think this will get you started with lambdas; as always, consult the MSDN documentation for more information. Still confused? Hopefully you are not.
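
    Not part of the original post, but here is a small illustration of the "expression tree" half of the definition above: the same lambda assigned to an Expression<Func<...>> is stored as a data structure that can be inspected, or compiled back into a delegate.

    using System;
    using System.Linq.Expressions;

    class ExpressionTreeExample
    {
        static void Main()
        {
            // Stored as an expression tree, not as executable code.
            Expression<Func<int, int>> square = x => x * x;
            Console.WriteLine(square);              // prints: x => (x * x)

            // Compile the tree back into a delegate and invoke it.
            Func<int, int> compiled = square.Compile();
            Console.WriteLine(compiled(6));         // prints: 36
            Console.ReadLine();
        }
    }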

    Read the article

  • Reminder: For a Complete View Of Your Concurrent Processing Take A Look At The CP Analyzer!

    - by LuciaC
    For a complete view of your Concurrent Processing take a look at the CP Analyzer! Doc ID 1411723.1 has the script to download and a 9 min video. The Concurrent Processing Analyzer is a self-service health-check script which reviews the overall Concurrent Processing footprint, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on best practices. This is a non-invasive script which provides recommended actions to be performed on the instance it was run on. For production instances, always apply any changes to a recent clone to ensure an expected outcome.

    • E-Business Applications Concurrent Processing Analyzer Overview
    • E-Business Applications Concurrent Request Analysis
    • E-Business Applications Concurrent Manager Analysis
    • Identifies Concurrent System Setup and configurations
    • Identifies and recommends Concurrent Best Practices
    • Easy to add Tool for regular Concurrent Maintenance
    • Execute Analysis anytime to compare trending from past outputs

    Feedback welcome!

    Read the article

  • Account Listings on APress and O'Reilly

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/20/e-books-from-apress-and-oreilly.aspx

    In recent days both APress and O'Reilly have radically improved the way they display items registered against your account with them. APress now shows only one line per e-book, and the multitude of formats is handled by a drop-down list. The result is that the list of APress books I have bought now requires less paging through. The only things that the APress list lacks are:
    • The ability to show all on the page (currently options for 10, 20 and 50 per page)
    • The ability to sort on title or date bought or date updated

    O'Reilly have always shown the formats available by a series of hyperlinks along one line. They have improved their list as follows:
    • You can sort on title or date bought or date updated
    • Clicking on a line shows full detail of the item (including image, download details, errata link and catalog page). Clicking again collapses the detail.
    • You can select all your purchased items together or just show e-books, print or videos

    Now why is the date updated important? Updates are issued for various books (particularly those made available whilst still being written). The publishers very kindly email you when an update is available, but finding it in the list to download again is not as easy as you think; however, sort on release date and they are easy to find!

    Read the article

  • Microsoft Offloaded Data Transfer (ODX)

    - by Charles Cline
    For all you admins and other technical people out there who have watched the Windows OS spool data from network storage to your workstation and then back to network storage, watch for Offloaded Data Transfer (ODX). I saw ODX at TechEd a few weeks ago, and the data movement is primarily kept on the backend storage network. EMC and other storage vendors are already posting about when they will have this functionality. Here's some information about it: http://msdn.microsoft.com/en-us/library/windows/desktop/hh848056(v=vs.85).aspx

    Read the article

  • Oops, I left my kernel zone configuration behind!

    - by mgerdts
    Most people use boot environments to move in one direction. A system starts with an initial installation and from time to time new boot environments are created - typically as a result of pkg update - and then the new BE is booted. This post is of little interest to those people, as no hackery is needed. This post is about some mild hackery.

    During development, I commonly test different scenarios across multiple boot environments. Many times those tests aren't related to the act of configuring or installing zones, so it's kinda handy to avoid the effort involved in zone configuration and installation. A somewhat common order of operations is like the following:

    # beadm create -e golden -a test1
    # reboot

    Once the system is running in the test1 BE, I install a kernel zone.

    # zonecfg -z a178 create -t SYSsolaris-kz
    # zoneadm -z a178 install

    Time passes, and I do all kinds of stuff to the test1 boot environment and want to test other scenarios in a clean boot environment. So then I create a new one from my golden BE and reboot into it.

    # beadm create -e golden -a test2
    # reboot

    Since the test2 BE was created from the golden BE, it doesn't have the configuration for the kernel zone that I configured and installed. Getting that zone over to the test2 BE is pretty easy. My test1 BE is really known as s11fixes-2.

    root@vzl-212:~# beadm mount s11fixes-2 /mnt
    root@vzl-212:~# zonecfg -R /mnt -z a178 export | zonecfg -z a178 -f -
    root@vzl-212:~# beadm unmount s11fixes-2
    root@vzl-212:~# zoneadm -z a178 attach
    root@vzl-212:~# zoneadm -z a178 boot

    On the face of it, it would seem as though it would have been easier to just use zonecfg -z a178 create -t SYSsolaris-kz within the test2 BE to get the new configuration over. That would almost work, but it would have left behind the encryption key required for access to host data and any suspend image. See solaris-kz(5) for more info on host data. I very commonly have more complex configurations that contain many storage URIs and non-default resource controls. Retyping them would be rather tedious.

    Read the article

  • Critical Patch Updates During EBS 11i Exception to Sustaining Support Period

    - by Elke Phelps (Oracle Development)
    As previously blogged in the EBS 11i and 12.1 Support Timeline Changes entry, two important changes to the Oracle Lifetime Support policies were announced at Oracle OpenWorld 2012 - San Francisco. These changes affect E-Business Suite Releases 11i and 12.1.

    Critical Patch Updates for EBS 11i during the Exception to Sustaining Support Period
    You may be wondering about the availability of Critical Patch Updates (CPU) for EBS 11i during the Exception to Sustaining Support period. The following details the E-Business Suite Critical Patch Update support policy for EBS 11i during the Exception to Sustaining Support period: Oracle will continue to provide CPUs containing critical security fixes for E-Business Suite 11i. CPUs will be packaged and released as cumulative patches for both ATG RUP 6 and ATG RUP 7. As always, we try to minimize the number of patches and dependencies required for uptake of a CPU; however, there have been quite a few changes to the 11i baseline since its release. For dependency reasons the 11i CPUs may require a higher number of files in order to bring them up to a consistent, stable, and well-tested level. EBS 11i customers will continue to receive CPUs up to and including the October 2014 CPU.

    Where can I learn more?
    There are two interlocking policies that affect the E-Business Suite: Oracle's Lifetime Support policies for each EBS release (timelines which were updated by this announcement), and the Error Correction Support policies (which state the minimum baselines for new patches). For more information about how these policies interact, see: Understanding Support Windows for E-Business Suite Releases.

    What about E-Business Suite technology stack components?
    Things get more complicated when one considers individual techstack components such as Oracle Forms or the Oracle Database. To learn more about the interlocking EBS+techstack component support windows, see these two articles: On Apps Tier Patching and Support: A Primer for E-Business Suite Users, and On Database Patching and Support: A Primer for E-Business Suite Users.

    Where can I learn more about Critical Patch Updates?
    The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents.

    Related Articles
    EBS 11i and 12.1 Support Timeline Changes
    Frequently Asked Questions about Latest EBS Support Changes
    Extended Support Fees Waived for E-Business Suite 11i and 12.0

    Read the article

  • My JavaOne 2012

    - by Geertjan
    I received a JavaOne speaker invitation for the following sessions and BOFs. Only one involves me on my own: Session ID: CON2987Session Title: Unlocking the Java EE 6 Platform The rest are combo packages, i.e., you get multiple speakers for the price of one.  Sessions and BOFs together with others:  Session ID: BOF4227 (together with Zoran Sevarac)Session Title: Building Smart Java Applications with Neural Networks, Using the Neuroph Framework Session ID: BOF5806 (together with Manfred Riem)Session Title: Doing JSF Development in NetBeans 7.1 Session ID: CON3160 (together with Allan Gregersen and others)Session Title: Dynamic Class Reloading in the Wild with Javeleon Discussion Panels:  Session ID: CON4952 (together with several NetBeans Platform developers)Session Title: NetBeans Platform Panel Discussion Session ID: CON6139 (together with several NetBeans IDE users)Session Title: Lessons Learned in Building Enterprise and Desktop Applications with the NetBeans IDE

    Read the article

  • Raspberry Pi + Azure + Mobile App

    - by Richard Jones
    Ongoing project idea. This is a long-running personal interest of mine: build a mobile app that shows you a push notification/pop-up alert when anyone calls your house phone. So I've taken delivery of a Raspberry Pi. I've ordered a new Crucible Technology Caller ID box (arriving soon). I have been writing/learning Python to implement the listener software. This will in turn push XML messages up to Azure for final delivery via push notifications to an app. The iOS app is already written to receive the notifications and allow address book additions made up from the phone numbers of incoming calls. So this is fusion: R-Pi, Azure, hardware and iOS. Details to follow as this plan unfolds.

    Read the article
