Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Obtaining positional information in the IEnumerable Select extension method

    - by Kyle Burns
    This blog entry is intended to provide a narrow and brief look at a way to use the Select extension method that I had until recently overlooked. Every developer who is using IEnumerable extension methods to work with data has been exposed to the Select extension method, because it is a pretty critical piece of almost every query over a collection of objects. The method is defined on IEnumerable<T> and takes as its argument a function that accepts an item from the collection and returns an object which will be an item within the returned collection. This allows you to perform transformations on the source collection. A somewhat contrived example would be the following code that transforms a collection of strings into a collection of anonymous objects:

        var media = new[] { "book", "cd", "tape" };
        var transformed = media.Select(item => new { Media = item });

    This code transforms the array of strings into a collection of objects which each have a string property called Media. If every developer using the LINQ extension methods already knows this, why am I blogging about it? I'm blogging about it because the method has another overload that I hadn't seen before I needed it a few weeks back, and I thought I would share a little about it with whoever happens upon my blog. In the other overload, the function defined in the first overload as:

        Func<TSource, TResult>

    is instead defined as:

        Func<TSource, int, TResult>

    The additional parameter is an integer representing the current element's position in the enumerable sequence. I used this information in what I thought was a pretty cool way to compare collections, and I'll probably blog about that sometime in the near future, but for now we'll continue with the contrived example I've already started, to keep things simple and show how this works. The following code sample shows how the positional information could be used in an alternating color scenario. I'm using a foreach loop because IEnumerable doesn't have a ForEach extension, but many libraries do add one, so you can update the code if you're using one of those libraries or have created your own.

        var media = new[] { "book", "cd", "tape" };
        foreach (var result in media.Select(
            (item, index) => new { Item = item, Index = index }))
        {
            Console.ForegroundColor = result.Index % 2 == 0
                ? ConsoleColor.Blue : ConsoleColor.Yellow;
            Console.WriteLine(result.Item);
        }
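    As a taste of the collection-comparison idea mentioned above, here is a minimal sketch of one way the positional overload could be used for that purpose (my own illustration, not code from the original post; assumes using System; and using System.Linq;):

        var expected = new[] { "book", "cd", "tape" };
        var actual = new[] { "book", "dvd", "tape" };
        // Pair each element with its position, then keep only positions where
        // actual differs from expected (or runs past the end of expected).
        var mismatches = actual
            .Select((item, index) => new { Item = item, Index = index })
            .Where(x => x.Index >= expected.Length || x.Item != expected[x.Index]);
        foreach (var m in mismatches)
        {
            Console.WriteLine("Difference at position {0}: {1}", m.Index, m.Item);
        }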

    Read the article

  • Creating a New XML Publisher Report Using an Existing EBS Purchasing Report

    - by Annemarie Provisero
    ADVISOR WEBCAST: Creating a New XML Publisher Report Using an Existing EBS Purchasing Report PRODUCT FAMILY: EBS - Procurement November 22, 2011 at 9:00 am EST, 12:00 pm Mid-Atlantic Standard Time, 2:00 pm London, 4:00 pm Egypt Time. This one-hour session is recommended for technical and functional users who want to try to create their own reports in Purchasing based on an existing (seeded) Oracle report. TOPICS WILL INCLUDE: Introduction to XML Publisher / Oracle BI Publisher Desktop / Setup and Process / Demo / References. A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, but also start their journey of cloud enabling their mainframe applications.
    • Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
    • Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration
    • Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
    • Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud enabling their mainframe applications
    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Decreasing the Height of the PinkMatter Flamingo Ribbon Bar

    - by Geertjan
    The one and only thing prohibiting wide adoption of PinkMatter's amazing Flamingo ribbon bar integration for NetBeans Platform applications (watch the YouTube movie here and follow the tutorial here) is... the amount of real estate taken up by the height of the taskpane. I was Skyping with Bruce Schubert about this and he suggested that a first step might be to remove the application menu. OK, once that had been done there was still a lot of height. But then I configured a bit further and now have this, which is pretty squishy but at least shows there are possibilities. How to get to the above point? Get the PinkMatter Flamingo ribbon bar from java.net (http://java.net/projects/nbribbonbar), which is now the official place where it is found, and then look in the "Flamingo Integration" module. There you'll find com.pinkmatter.modules.flamingo.LayerRibbonComponentProvider. Do the following:
    1. Comment out "addAppMenu(ribbon);" in "createRibbon()". That's the end of the application menu.
    2. Change the "addTaskPanes(JRibbon ribbon)" method from this...

        private void addTaskPanes(JRibbon ribbon) {
            RibbonComponentFactory factory = new RibbonComponentFactory();
            for (ActionItem item : ActionItems.forPath("Ribbon/TaskPanes")) { // NOI18N
                ribbon.addTask(factory.createRibbonTask(item));
            }
        }

    ...to the following:

        private void addTaskPanes(JRibbon ribbon) {
            RibbonComponentFactory factory = new RibbonComponentFactory();
            for (ActionItem item : ActionItems.forPath("Ribbon/TaskPanes")) { // NOI18N
                RibbonTask rt = factory.createRibbonTask(item);
                List<AbstractRibbonBand<?>> bands = rt.getBands();
                for (AbstractRibbonBand arb : bands) {
                    arb.setPreferredSize(new Dimension(40, 60));
                }
                ribbon.addTask(rt);
            }
        }

    Hurray, you're done. Not a very great result yet, but at least you've made a start in decreasing the height of the PinkMatter Flamingo ribbon bar. If anyone gets further with this, I'd be very happy to hear about it!

    Read the article

  • WAIT-VHUB? What's Going On?

    - by Neeraj Gupta
    I know many of you have been working on Oracle's Exalogic and other Engineered Systems. With partitions enabled now, things have gone multi-dimensional. But it's fun, isn't it? You have some EoIB configurations together with InfiniBand partitions, and the VNICs are not coming up, staying in the WAIT-VHUB state? Chances are that you have forgotten to add the InfiniBand Gateway switches' Bridge-X port GUIDs to your partition. These must be added as FULL members for EoIB to work properly. VHUB means a virtual hub in EoIB. Bridge-X is the access point for hosts to work over EoIB, so that's why it must be a full member of the partition.
    Step 1: Find out the port GUIDs of your Bridge-X devices in the IB Gateway switch.

        # showgwports
        INTERNAL PORTS:
        ---------------
        Device    Port Portname    PeerPort PortGUID           LID    IBState  GWState
        ---------------------------------------------------------------------------
        Bridge-0   1   Bridge-0-1     4     0x0010e00c1b60c001 0x0002 Active   Up
        Bridge-0   2   Bridge-0-2     3     0x0010e00c1b60c002 0x0006 Active   Up
        Bridge-1   1   Bridge-1-1     2     0x0010e00c1b60c041 0x0026 Active   Up
        Bridge-1   2   Bridge-1-2     1     0x0010e00c1b60c042 0x002a Active   Up

    Step 2: Add these port GUIDs to the IB partition associated with EoIB. Log in to the master SM switch for this task.

        # smpartition start
        # smpartition add -pkey <PKey> -port <port GUID> -m full
        # smpartition commit

    Enjoy!

    Read the article

  • Top tweets SOA Partner Community – November 2012

    - by JuergenKress
    Dear SOA partner community member Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. BPM is a key solution in this portfolio. To position BPM to your customers you can find many use case ideas in the paper BPM 11g Patterns and industry specific value propositions for Financial Services & Insurance & Retail. Many more Process Accelerators (11.1.1.6.2) have become available; they are an excellent demo and starting point for BPM projects. Our SOA Suite team published the most important OOW presentations at the OTN website. The Oracle SOA proactive support team is running a series of introductory blog posts about SOA and JMS. To become an expert in SOA, Bob highlighted the latest list of SOA books. For OSB projects we recommend the EAIESB OSB poster. Thanks to all the experts who contributed and shared their SOA & BPM knowledge this month again. Please feel free to send us the link to your blog post via twitter @soacommunity:
    • Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel
    • Fault Handling Slides and Q&A by Vennester
    • Installing Oracle Event Processing 11g by Antony Reynolds
    • Expanding the Oracle Enterprise Repository with functional documentation by Marc Kuijpers
    • Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile by Michelle Kimihira
    • A brief note for customers running SOA Suite on AIX platforms by Christian
    • ACM - Adaptive Case Management by Peter Paul
    • BPM 11g - Dynamic Task Assignment with Multi-level Organization Units by Mark Foster
    • Oracle Real User Experience Insight: Oracle's Approach to User Experience
    Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress Oracle SOA & BPM Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/soanewsNovember2012 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren't as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution. I started with a List control which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements needed to be passed to the subreport as parameters. As you can see, the layout is simply text boxes that are bound to the dataset. The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group your report by. A good example in this case would be the employee name or ID. Create a second report which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document by parameterizing the dataset. Add the subreport to the main report inside the row of the List control. This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property. The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. Some of the documentation states that the dialog will automatically detect the child parameters, but this has not been my experience. You must make sure that the names match exactly. Tie the name of the parameter to either a field in the dataset or a parameter of the parent report. del.icio.us Tags: SQL Server Reporting Services,SSRS,SQL Server,Subreports

    Read the article

  • Energy Firms Targeted for Sensitive Documents

    - by martin.abrahams
    Numerous multinational energy companies have been targeted by hackers who have been focusing on financial documents related to oil and gas field exploration, bidding contracts, and drilling rights, as well as proprietary industrial process documents, according to a new McAfee report. "It ... speaks to quite a sad state of our critical infrastructure security. These were not sophisticated attacks ... yet they were very successful in achieving their goals," said Dmitri Alperovitch, McAfee's vice president for threat research. Apparently, the attacks can be traced back over several years, creating a sustained security compromise that has provided access to highly sensitive information that is of huge financial value to competitors. The value of IRM as an additional layer of protection is clear. Whether your infrastructure security is in a sad state or is state of the art, breaches are always a possibility - and in any case, a lot of sensitive information is shared with third parties whose infrastructure security might not be as good as yours. IRM protects the individual information assets directly so that, even if infrastructure security is compromised, your critical information is encrypted and trackable and only accessible to authenticated, authorised, audited users. The full McAfee report is available here.

    Read the article

  • Updated Payroll Tax Liability Formula for Dynamics GP

    - by Ryan McBee
    Prior to the latest Payroll Update for Great Plains, you could do an audit check of the Payroll Tax Liability GP calculation by simply taking Federal Tax Withholding + FICA Medicare Withholding times 2 + FICA Social Security Withholding times 2. As you probably know by now, the Employer's portion of FICA remains 6.2%, while the Employee's portion has been reduced to 4.2%. Accordingly, a number of clients have contacted me to say this formula is no longer applicable and have asked for a revised formula. The new formula is described below and ties out to a sample Payroll Run using Fabrikam. As you can see from above, the prior formula is not applicable and the new audit check is as follows:

        Federal Tax WH         $  6,655.17
        Employee Medicare      $    408.47
        Employee SS            $  1,746.54
        Employer Medicare      $    408.47
        Employer SS            $  1,746.55  (FICA Owed - FICA Medicare WH)
        -----------------------------------
        Total Tax Liability    $ 10,965.20

    I have talked with Microsoft and at this time they have no intent of modifying the report to split out the employer (6.2%) and employee (4.2%) FICA portions.
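    For readers who want to script this audit check, here is a minimal C# sketch (my own illustration, plugging in the Fabrikam sample figures from the table above):

        // Minimal sketch of the revised audit check, using the sample figures above.
        decimal federalTaxWH     = 6655.17m;
        decimal employeeMedicare = 408.47m;
        decimal employeeSS       = 1746.54m;
        decimal employerMedicare = 408.47m;  // matches the employee Medicare amount
        decimal employerSS       = 1746.55m; // FICA Owed - FICA Medicare WH
        decimal totalLiability   = federalTaxWH + employeeMedicare + employeeSS
                                 + employerMedicare + employerSS;
        System.Console.WriteLine(totalLiability); // prints 10965.20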

    Read the article

  • Oracle Fusion Applications Design Patterns Now Available

    - by Frank Nimphius
    "The Oracle Fusion Applications user experience design patterns are published! These new, reusable usability solutions and best-practices, which will join Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now, the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too.These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when [...]  designing exciting, new, highly usable applications -- in the cloud or on-premise.  Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business.  What’s the best way to get started? We’ve made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you’re on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let us know in the comments section or use the contact form provided."

    Read the article

  • Application Module Extension in Oracle Application R12

    - by Manoj Madhusoodanan
    In this blog I will describe how to extend an Application Module. I will explain this based on my previous blog, PL/SQL based EO. I want to extend FndUserAM to add a procedure that raises a custom business event when the FND_USER has been created successfully. Here I am using a custom business event, "xxcust.oracle.apps.demo_event". Please find the code used in the Business Event: Table, Package. The following steps need to be performed.
    1) Download all files pertaining to "Entity Object Based on PL/SQL" to JDEV_USER_HOME/myprojects and JDEV_USER_HOME/myclasses. If you want to see the content of a source java file, decompile it and save it in JDEV_USER_HOME/myprojects.
    2) Create XXFndUserAM as follows.
    3) Add the following method to XXFndUserAMImpl.

        import oracle.apps.fnd.framework.OAException;
        import oracle.apps.fnd.framework.server.OADBTransactionImpl;
        import oracle.apps.fnd.wf.bes.BusinessEvent;
        import oracle.apps.fnd.wf.bes.BusinessEventException;
        import java.sql.Connection;

        public void raiseEvent(String userName) {
            String eventName = "xxcust.oracle.apps.demo_event";
            String eventKey = userName;
            Connection conn = ((OADBTransactionImpl)getOADBTransaction()).getJdbcConnection();
            BusinessEvent event = null;
            try {
                event = new BusinessEvent(eventName, eventKey);
                /* Setting Parameters */
                event.setStringProperty("USER_NAME", userName);
                event.setStringProperty("STATUS", "User has been created successfully");
                event.raise(conn);
            }
            catch (BusinessEventException e) {
                throw new OAException("Exception occurred when raising the business event - " + e.getMessage());
            }
            getOADBTransaction().commit();
        }

    4) Create a controller which extends from xxcust.oracle.apps.fnd.user.webui.CreateFndUserCO. Call the raiseEvent method from the new controller.
    5) Create a substitution for FndUserAM.
    6) Migrate the following files to $JAVA_TOP:
        xxcustom.oracle.apps.fnd.user.server.FndUserAMImpl.java
        xxcustom.oracle.apps.fnd.user.server.XXFndUserAM.xml
        xxcustom.oracle.apps.fnd.user.webui.XXCreateFndUserCO.java
    7) Migrate the substitution.
    8) Restart the server.
    9) Personalize the page /xxcust/oracle/apps/fnd/user/webui/CreateFndUserPG and set the new controller.
    10) Verify that the substitution has been properly applied by clicking About the Page.
    11) Access the page and create a user. You can see the result of the Business Event.

    Read the article

  • Conditionally Auto-Executing af:query Search Form Based on User Input

    - by steve.muench
    Due to an extreme lack of time caused by other work priorities -- working hard on some interesting new ADF features for a future major release -- 2010 has not been a banner year for my production of samples to post to my blog, but to show my heart is in the right place I wanted to close out the year by posting example #160: Conditionally Auto-Executing af:query Search Form Based on User Input. Enjoy. Happy New Year.

    Read the article

  • Oracle Dojo "Spezial": Cloud Control Tips Now Available Offline

    - by Ralf Durben (DBA Community)
    Various tips on Cloud Control have already been published in the DBA Community - topics such as a functionality overview, installation notes, use of the Cloud and Chargeback features, and much more. By popular request, these tips are now also available offline "on the go" as a PDF file for smartphones and tablets in Oracle Dojo No. 3 "Spezial". This way you have our tips available at any time.

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx
    I received a lot of interesting feedback on my previous blog post and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts for 1.44ms, 44% more than the expected time, but far better than the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST, but that didn't change the results. When I was using the I2C-controlled pin I tried the same and the timings got far worse (increasing more than 10 times), so I did not comment on that part, wanting to investigate the issue in a bit more detail. It seems that increasing the priority of the application thread impacts the I2C communication negatively.
    I also tried the Linux-based implementation (using a different Galileo board, since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch, modified to use pin 2 and blink the led for 1ms, are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, getting around 3.2ms instead of 1 (320% compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Neither system was under load during the test; loading some applications that use part of the CPU time would make those timings even less reliable, but I think that those numbers are enough to draw some conclusions.
    It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development. The Galileo running the Linux-based stack or running Windows for IoT is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low-level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but the support for connectivity is limited, and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big, and the rich OS underneath seems to not provide any help doing that. Why should I use sockets and not be able to access all the high-level connectivity features we have on "full" Windows? I know that it's possible to use some third-party libraries, try to build them using the Windows For IoT SDK etc., but this means re-inventing the wheel every time and can also lead to some IP concerns if used for products meant to be closed-source.
    I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people that really want to design devices that leverage internet connectivity and the cloud processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc. I know that those components may require additional storage and memory etc., so making the OS componentizable (or, at least, providing a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together.
    I can understand that the Arduino and Raspberry Pi* success may have attracted the attention of marketing departments worldwide, and almost any new development board these days is promoted as "XXX response to Arduino" or "YYYY alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offer to provide, in the end, a better product or service to their customers. Marketing is important, but can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers, integrating it with hardware and application software. I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples etc. On the other side, being able to run a sketch designed for an 8-bit microcontroller on a full-featured application processor may sound cool and an easy upgrade path for people that just experimented with sensors etc. on Arduino, but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.
    *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi, try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO

    Read the article

  • A strong component keeps everything together

    - by Justin Paul-Oracle
    Most of the time when you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or Database Objects (like new tables, views, indexes, etc). I have seen that libraries and Database Objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directories (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component, or where you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common that some of these manual steps are missed when you have multiple teams and people working on the system. Here are a few points to ponder:
    • Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to add this new directory.
    • Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma-separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency on this library component by specifying the comma-separated list of features in the Advanced Build Settings.
    • The component wizard allows you to create custom install/uninstall Java code. The wizard will create an install filter class when you check the "Has Install" checkbox on the "Install/Uninstall Settings" tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component. If you do a lot of custom component development, consider creating an install/uninstall Java class which can execute queries defined within the component.
    To sum up, whenever you write a new custom component, make sure that you bundle everything within the component.

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation.
    A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast InfiniBand switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the InfiniBand switches, etc. It has an InfiniBand data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex.
    The system is highly Engineered. But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with InfiniBand, myriad ways the components could be interconnected, myriad tunable settings, etc.
    But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer:
    Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query InfiniBand's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction. In several daily joint debugging sessions between the Solaris, InfiniBand, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases was just 10 days.
    Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support teams, which comprise both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams identified the root cause: a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers. If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer.
    That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side followed by the final fix on the Solaris side highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Fujitsu Raku-Raku SmartPhone: Japanese Digital Seniors UX Insight from @debralilley

    - by ultan o'broin
    Super blog posting on the super-important subject of digital inclusion by Oracle partner Fujitsu appstech maven and Oracle Applications User Experience FXA-er and ACE Director Debra Lilley (@debralilley). Debra tells us how Fujitsu is enabling digital inclusion for older mobile users in Japan with their Raku-Raku smartphone: Fujitsu Raku-Raku - My UX Homework (Raku-Raku means easy or comfortable in Japanese). There are UX mobile, social media, and methodology takeaways there for us in Debra's blog. Fujitsu Raku-Raku Smartphone Demo. I encourage you to read Debra's blog. In it, she makes reference to a tailored social media experience for those digital seniors, as they'd be called in Japan (the UK and Ireland use the term silver surfers). You can find that online experience here: Online Community site for Fujitsu Raku-Raku Smartphone Digital Seniors (English translation via Google Translate). It's an important reminder that UX is global, sure, but also that worldwide accessibility and digital inclusion are priorities for UX too. It's vital that we understand such aspects of technology adoption and how the requirements of different categories of technology users can be met. Oracle is committed to providing the best possible user experience for enterprise users of all ages and abilities. That means talking with all sorts of people worldwide and understanding how and why they want to use our technology and what their context of use is. You can read more about Oracle's accessibility program on our corporate website. Proud to say I prompted a few questions in Japan all the way from Ireland. So, UX is not only global, but you can drive UX research globally too without ever leaving home. Brilliant job, Debra. Here's to more such joint research creativity and UX collaborations worldwide between us. Wondering where we might go next? And what a fun way to do things too!

    Read the article

  • Solving File Upload Cancel Issue

    - by Frank Nimphius
    In Oracle JDeveloper 11g R1 (I did not test 11g R2) the file upload component is submitted even if users click a cancel button with immediate="true" set. Usually, immediate="true" on a command button bypasses all model updates, which would make you think that the file upload isn't processed either. However, using a form like the one shown below, pressing the cancel button has no effect in that the file upload is not suppressed.

        <af:form id="f1" usesUpload="true">
          <af:inputFile label="Choose file" id="fileup" clientComponent="true"
                        value="#{FileUploadBean.file}"
                        valueChangeListener="#{FileUploadBean.onFileUpload}">
          </af:inputFile>
          <af:commandButton text="Submit" id="cb1" partialSubmit="true"
                            action="#{FileUploadBean.onInputFormSubmit}"/>
          <af:commandButton text="cancel" id="cb2" immediate="true"/>
        </af:form>

    The solution to this problem is a change of the event root, which you can achieve either by i) setting partialSubmit="true" on the command button, or by ii) surrounding the form parts that should not be submitted when the cancel button is pressed with an af:subform tag.

    i) partialSubmit solution

        <af:form id="f1" usesUpload="true">
          <af:inputFile .../>
          <af:commandButton text="Submit" .../>
          <af:commandButton text="cancel" immediate="true" partialSubmit="true" .../>
        </af:form>

    ii) subform solution

        <af:form id="f1" usesUpload="true">
          <af:subform id="sf1">
            <af:inputFile ... />
            <af:commandButton text="Submit" .../>
          </af:subform>
          <af:commandButton text="cancel" immediate="true" .../>
        </af:form>

    Note that the af:subform surrounds the input form parts that you want to submit when the submit button is pressed. By default, the af:subform only submits its contained content if the submit is issued from within.

    Read the article

  • YOUR FREE, EXCLUSIVE, ONLINE UPDATE ON FANTASTIC NEW ORACLE PARTNER OPPORTUNITIES - REGISTER TODAY!

    - by Claudia Costa
    New products. New specializations. New opportunities. There really has never been a better time to be an Oracle partner! Find out exactly what Oracle's "Software. Hardware. Complete" strategy, and the latest developments in the OPN Specialized program, mean for your business. Register now for the Oracle PartnerNetwork Days Virtual Event on the 29th of June at 11:00h to learn:
    • How to use Oracle's uniquely comprehensive technology stack to grow your business
    • How specialization with Oracle can significantly improve your competitive position
    • How the Oracle PartnerNetwork is evolving to help you succeed
    Highlights include important updates from Oracle EMEA strategy, partner and product leaders, a live link to the Oracle FY11 Global Partner Kickoff, and interviews with local Oracle partners that are already enjoying the benefits of specialization. The event will also feature:
    • Live Q&A sessions with our speakers
    • Virtual information booths packed with useful information
    • Opportunities to network with Oracle experts and your peers
    • A special guest speaker: a former Microsoft executive who has used the principles of specialization with spectacular results to become one of the world's most successful social entrepreneurs
    Plus, at the end of the event, you can submit your feedback form for your chance to win two passes to Oracle OpenWorld in San Francisco this September! CLICK HERE TO REGISTER NOW!

    Read the article

  • Retail in New York - a walk down 5th Avenue

    - by sarah.taylor(at)oracle.com
    It's the week of the NRF Big Show and all eyes in the retail industry are on New York. The Big Apple is famous for Big Retail - with a proliferation of incredibly iconic stores. The environment is exciting and familiar even to people visiting this small island for the first time. Most of us have travelled down Fifth Avenue in movies and TV shows even if we have never set foot on American soil. I find it one of the most exciting retail cities in the world, and I am thrilled this year to be here with so many of Oracle's international retail customers who are joining us for the Retail Exchange. The Oracle program brings retailers from all over the planet together to share ideas and be inspired by New York retail and the NRF event. The show celebrates its 100th year in 2011, and New York itself has been recognized globally as the capital of innovative retail for just as long. Fifth Avenue is where many global brands have placed their flagship stores, and businesses are in constant competition to set themselves apart from their competitors - both in the store and from the street. These flagship retail destinations present what today's customers are finding most exciting and delightful about retail. For the tourist market, they may only visit these stores once, but the impression that a trip to a flagship store leaves with a customer can last a lifetime. One of the stores that is currently turning heads on Fifth Avenue is Hollister, sister brand to Abercrombie and Fitch, which has filled its shop front with a massive live video (and audio) feed of surfers on the beach in California. To complete the effect, they also have troughs of water in front of the video screens to bring the sea to the street. And this isn't the only kind of surfing that retailers are considering today: multi-channel retail is a hot topic that all of the retailers joining the Retail Exchange are considering. The rest of the world looks to the brands along Fifth Avenue for inspiration - how they take advantage of new opportunities, how they set themselves apart from their competitors and how they keep their products fresh and desirable. With these inspiring pioneers in New York, it's little wonder that NRF's Big Show is so popular, and that New York is viewed as one of the retail capitals of the world. It is a pleasure to be here with so many of the world's greatest international retailers.

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I'm pretty happy with where we are, although I think it's a few months later than I originally planned. I'm glad I held it back; it gave me some more time to think about some things a little more and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).
    UppercuT v1
    Builds On Linux. Perhaps the most significant change in UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.
    Multi-Targeting. Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it's as simple as adding a comma to the microsoft.framework property, i.e. "net-3.5, net-4.0", to suddenly produce both framework builds. When you build, you get both outputs (if you meet each framework's requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.
    Semantic Versioning. By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org. SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets: Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts, one for the assembly version, one for the file version, and one for the product version:
    • All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch, i.e.: 1.1.0
    • Assembly Version - The assembly version follows SemVer most closely. The last digit is always 0. Major.Minor.Patch.0, i.e.: 1.1.0.0
    • File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build, i.e.: 1.1.0.2650
    • Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash, i.e. (TFS/SVN): 1.1.0.235, or (Git/HG): 1.1.0.a45ace4346adef0
    SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).
    Gems Support. Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet (CoApp would be a possible replacement).
    Nitriq Support. Nitriq is a code analysis tool like NDepend. It's built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It's a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config, then add the Nitriq project files at the root of your source. Please refer to the Nitriq documentation on how these are created.
    UppercuT v1.1
    Obfuscation. One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!
    Closing. Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
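    To make the three version values concrete, here is a minimal sketch of how they might surface as .NET assembly attributes in a generated AssemblyInfo.cs (my own illustration of the scheme described above, not output copied from UppercuT):

        // Hypothetical AssemblyInfo.cs fragment illustrating the three version concepts.
        using System.Reflection;

        [assembly: AssemblyVersion("1.1.0.0")]                            // Major.Minor.Patch.0
        [assembly: AssemblyFileVersion("1.1.0.2650")]                     // Major.Minor.Patch.Build
        [assembly: AssemblyInformationalVersion("1.1.0.a45ace4346adef0")] // Major.Minor.Patch.RevisionOrHash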

    Read the article

  • Managing Your First SharePoint Project or Team

    - by Mark Rackley
    (*editor's note* If you have proper SharePoint Training, know the difference between a site and a site collection, and have the utmost respect for the knowledge of your SharePoint team, skip this blog and go directly to meetdux.com, do not pass go, do not collect $200… otherwise, please proceed)
    Dear Mr. or Mrs. I-know-nothing-about-SharePoint-but-hey,-I-have-manager-in-my-title-so-I'll-tell-you-how-to-do-your-job,
    Thank you so much for joining the Acme corporation. We appreciate your eagerness and willingness to jump in and help us accomplish all of our goals here at Acme (these roadrunner rockets don't make themselves). You may have noticed that we have this thing called SharePoint lying around and we have invested some time and money to make it not a complete piece of garbage. So, I thought I'd give you some pointers to help make your stay here enjoyable and productive.
    Yeah… you don't really know SharePoint
    Just because you had a mysite at your last organization or had a SharePoint 2003 team site does NOT mean you comprehend the vastness that is SharePoint. You don't know what's going on behind the scenes. You don't know what should and should not be done. No, we CAN'T just query the SQL database directly. Yes, it really does take that long. No, we can't do that out-of-the-box.
    Your experience doesn't mean as much as you think it means…
    Yes, I'm aware that you co-created the internet with Al Gore and have been managing projects since I was blowing up GI Joe figures with firecrackers; however, SharePoint is not like anything you have worked with before from a management perspective. Please don't tell us the proper way to do our job or tell us how "you" would do it, and PLEASE don't utter the words "I used to do some .NET development so let me know if you get stuck and need some guidance." It MAY be possible for an incredible project manager to manage a SharePoint project and not understand the technology, but if you force your ideas on us or treat us like we don't really know what we're doing then you will prove yourself to NOT be one of those types.
    Oh no you didn't…
    Please don't tell us how you can bring in a group of guys from Kazakhstan to do the project for $20/hr. There are many companies out there who can do some really crappy SharePoint work and we don't want to be stuck maintaining their junk. Do you know what it means to deploy a solution? Neither do some of those companies out there. However, there are a few AWESOME consulting firms out there, but $150/hr is cheap for these guys. Believe me, it's worth it though. You get what you pay for!
    Show us some respect
    We truly do appreciate and value your opinion and experience, but when we tell you something is different in SharePoint don't be condescending and dismiss OUR experience and opinions. We have spent a lot of time and energy learning a very complicated technology that can open up a world of possibilities when used properly. We just want to make sure it is used properly. It's not the same as .NET development. It's not like a regular web application. There's more going on behind the scenes than you can possibly fathom. Have a little faith in us please and listen when we talk. You may actually learn a thing or two.
    Take some time to learn the technology
    There is hope… you don't have to be totally worthless. Take some time to learn SharePoint. Learn what it is and what it can do. Invest some time in learning our SharePoint environment. What's our logical architecture and taxonomy? What governance do we have in place?
If you just thought “huh?” then yes, I’m talking to you. Sincerely, Your SharePoint Team (This rant is not pointed at any particular organization or person. If you think it’s about you, you are wrong. This is just a general rant based upon things people have told me and things I’ve seen. If you don’t think it applies to you, please move on. If you think you might be guilty of handling your SharePoint team the wrong way, then just please listen, learn, and have a little faith in your team. You all have the same goal in mind. Also, take the time to learn something about SharePoint, you will all be less frustrated with each other.)

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.

Step 1: Open Visual Studio and create a new Integration Services Project.

Step 2: Add a new Data Flow Task to the Control Flow window.

Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component type: pick the source type, as we will be outputting columns from our stored procedure.

Step 4: Double-click the Script Component to open the editor.

Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property from the property editor. In our example, we'll add the column JobID of type String.

Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.

Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures, along with their inputs and outputs, in the product help.

//Configure the connection string with your credentials
String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

using (SalesforceConnection conn = new SalesforceConnection(connectionString))
{
    //Open the connection before executing any commands
    conn.Open();

    //Create the command to call the stored procedure CreateJob
    SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
    cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

    //Execute CreateJob
    //CreateBatch requires JobID as input, so we store this value for later
    SalesforceDataReader rdr = cmd.ExecuteReader();
    String JobID = "";
    while (rdr.Read())
    {
        JobID = (String)rdr["JobID"];
    }

    //Create the command for CreateBatch; for this example we are adding two new rows
    SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
    batCmd.CommandType = CommandType.StoredProcedure;
    batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
    batCmd.Parameters.Add(new SalesforceParameter("Aggregate",
        "<Contact><Row><FirstName>Bill</FirstName>" +
        "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName>" +
        "<LastName>Black</LastName></Row></Contact>"));

    //Execute CreateBatch
    SalesforceDataReader batRdr = batCmd.ExecuteReader();
}

Step 7b: If you specified output columns earlier, you can now add data to them using the UserComponent Output0Buffer. For example, we set an output column called JobID of type String, so now we can set a value for it. We modify the DataReader loop that consumes the output of CreateJob like so:

while (rdr.Read())
{
    Output0Buffer.AddRow();
    JobID = (String)rdr["JobID"];
    Output0Buffer.JobID = JobID;
}

Step 8: Note: You will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and include the following using statements at the top of the class:

using System.Data;
using System.Data.RSSBus.Salesforce;

Step 9: Once you are done editing your script, save it and close the window. Click OK in the Script Transformation window to go back to the main pane.

Step 10: If you had any outputs from the Script Component, you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.

Step 11: Your project should be ready to run.
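For orientation, here is a minimal sketch of how Steps 7 and 7b fit together inside the script component. It assumes the default names SSIS generates for a source-type Script Component with a single output named "Output 0" (the ScriptMain class, its UserComponent base class, and the Output0Buffer); the designer-generated wrapper files are omitted, and the CreateBatch call from Step 7 would slot into the same using block.

using System;
using System.Data;
using System.Data.RSSBus.Salesforce;

public class ScriptMain : UserComponent
{
    public override void CreateOutputRows()
    {
        //Assumed placeholder credentials; replace with your own
        String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

        using (SalesforceConnection conn = new SalesforceConnection(connectionString))
        {
            conn.Open();

            //Call the CreateJob stored procedure (Step 7)
            SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
            cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

            //Push each returned JobID into the output buffer (Step 7b)
            using (SalesforceDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    Output0Buffer.AddRow();
                    Output0Buffer.JobID = (String)rdr["JobID"];
                }
            }
        }
    }
}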

    Read the article

  • K-12 and Cloud considerations

    - by user736511
Much like every other Public Sector organization, school districts in the US and Canada are under tremendous pressure to deliver consistent and modern services while operating with reduced budgets, IT personnel shortages, and staff attrition. Electronic/remote learning and the need for immediate access to resources such as grades, calendars, and curricula are straining IT environments that were already burdened with meeting privacy requirements imposed by both regulators and parents/students. One area viewed as a solution to at least some of these challenges is the use of "Cloud" in education. Although the concept of "Cloud" is nothing new in education, with many providers already supplying educational material over the web, school districts are now deferring previously in-house-hosted services to established commercial vendors to accommodate document sharing, app hosting, and even e-mail. Doing so, however, does not reduce an important risk: privacy. As always, Cloud implementations are viewed skeptically because of the perceived loss of control over sensitive data and its protection, although with a careful approach and the right tooling, the benefits realized by Clouds can extend to security and privacy. Oracle's comprehensive approach to data privacy and identity management ensures that the necessary tools are available to support regulations, operational efficiencies, and strong security regardless of where the sensitive data is stored - on premises or in a Cloud. Common management tools, role-based access controls, access policy management, and engineered systems provided by Oracle can be the foundational pieces on which school districts build their Cloud implementations without having to construct the security layer themselves. Their biggest challenge, and it is a positive one, is how best to take advantage of Oracle's DB Security and IDM functionality to reduce operational costs while enabling modern applications and data delivery to those who need access to it. For more information please refer to http://www.oracle.com/us/products/middleware/identity-management/overview/index.html and http://www.oracle.com/us/products/database/security/overview/index.html.

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of pop email addresses and even an exchange address to my client's server... times have changed.

What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have much more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the junk email would stop, but that hasn't happened yet, any more than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their car... but that's another whole topic and I digress.

So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money?

Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter huh.... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found:

I installed it, and had to do a couple things:

1) SPAMfighter stuffed the SPAMfighter folder into my client's exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter client settings for Outlook, I told it to put all spam there.

2) It didn't seem to be doing anything. There's a ribbon button that you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates.

3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder, thinking it was interfering.

4) Day 2 seemed to go about like Day 1... but I hung in there.

5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!

I'm a new paying customer. After watching SPAMfighter work this morning, I purchased a 1-year license, and I now can sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes this is simply another way of using the delete key, but you know what? ... it feels good :)

Here's a screenshot of the stats after just about 48 hours of being onboard: Note that all the ones blocked by me were during Day 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

< Previous Page | 390 391 392 393 394 395 396 397 398 399 400 401  | Next Page >