Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.


  • Can't find instructions how to use Windows 7 drivers on Windows Server 2008 R2

    - by Robert Koritnik
    Windows 7 x64 comes with all sorts of signed drivers, so there's a high probability that all drivers for your machine will be installed during system setup. On the other hand, Windows Server 2008 R2 doesn't, even though it's practically the same OS when it comes to drivers. I know there's a very good reason for this difference: it's a server product, not a desktop one. But the thing is that many power users and developers use a server OS on their workstations, which are usually desktop machines (a bit more powerful, though) and would benefit from the whole driver spectrum that Windows 7 offers...

    Question
    I know I've been reading on the internet about some trick where you first install Windows 7, then do something to get either all Windows 7 drivers or just those installed, and then install Windows Server 2008 R2 and use those Windows 7 drivers. The thing is, I can't find these instructions on the internet any more. If anybody knows where they are, please provide the link for the rest of us.

    Read the article

  • Java Spotlight Episode 103: 2012 Duke Choice Award Winners

    - by Roger Brinkley
    Our annual interview with the 2012 Duke's Choice Award winners, recorded live at JavaOne 2012. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    Events
    - Oct 13, Devoxx 4 Kids Nederlands
    - Oct 15-17, JAX London
    - Oct 20, Devoxx 4 Kids Français
    - Oct 22-23, Freescale Technology Forum - Japan, Tokyo
    - Oct 30-Nov 1, ARM TechCon, Santa Clara
    - Oct 31, JFall, Netherlands
    - Nov 2-3, JMaghreb, Morocco
    - Nov 13-17, Devoxx, Belgium

    Feature Interview
    Duke's Choice Award Winners 2012 - Show Presentation

    London Java Community
    The second user group receiving a Duke's Choice Award this year, the London Java Community (LJC) and its users have been active in the OpenJDK, the Java Community Process (JCP) and other efforts within the global Java community.

    Student: Nokia Developer Group
    This year's student winner, Ram Kashyap, is the founder and president of the Nokia Student Network, and was profiled in the "The New Java Developers" feature in the March/April 2012 issue of Java Magazine. Since then, Ram has maintained a hectic pace, graduating from the People's Education Society Institute of Technology in Bangalore, India, while working on a Java mobile startup and training students on Java ME.

    Jelastic, Inc.
    Moving existing Java applications to the cloud can be a daunting task, but startup Jelastic, Inc. offers the first all-Java platform-as-a-service (PaaS) that enables existing Java applications to be deployed in the cloud without code changes or lock-in.

    NATO
    The first-ever Community Choice Award goes to the MASE Integrated Console Environment (MICE) in use at NATO. Built in Java on the NetBeans platform, MICE provides a high-performance visualization environment for conducting air defense and battle-space operations.

    Duchess
    Rather than focus on a specific geographic area like most Java User Groups (JUGs), Duchess fosters the participation of women in the Java community worldwide. The group has more than 500 members in 60 countries, and provides a platform through which women can connect with each other and get involved in all aspects of the Java community.

    AgroSense Project
    Improving farming methods to feed a hungry world is the goal of AgroSense, an open source farm information management system built in Java and the NetBeans platform. AgroSense enables farmers, agribusinesses, suppliers and others to develop modular applications that will easily exchange information through a common underlying NetBeans framework.

    Apache Software Foundation Hadoop Project
    The Apache Software Foundation's Hadoop project, written in Java, provides a framework for distributed processing of big data sets across clusters of computers, ranging from a few servers to thousands of machines. This harnessing of large data pools allows organizations to better understand and improve their business.

    Parleys.com
    E-learning specialist Parleys.com, based in Brussels, Belgium, uses Java technologies to bring online classes and full IT conferences to desktops, laptops, tablets and mobile devices. Parleys.com has hosted more than 1,700 conferences, including Devoxx and JavaOne, for more than 800,000 unique visitors.

    Winners not presenting at the JavaOne 2012 Duke's Choice Awards BOF

    Liquid Robotics
    Liquid Robotics is an ocean data services provider whose Wave Glider technology collects information from the world's oceans for application in government, science and commercial applications. The organization features the "father of Java" James Gosling as its chief software architect.

    United Nations High Commissioner for Refugees
    The United Nations High Commissioner for Refugees (UNHCR) is on the front lines of crises around the world, from civil wars to natural disasters. To help facilitate its mission of humanitarian relief, the UNHCR has developed a light-client Java application on the NetBeans platform. The Level One registration tool enables the UNHCR to collect information on the number of refugees and their water, food, housing, health, and other needs in the field, and combines that with geocoding information from various sources. This enables the UNHCR to deliver the appropriate kind and amount of assistance where it is needed.

    Read the article

  • Free CodeSmith License!

    - by Randy Walker
    The catch? Attend the Ozarks .NET User Group meeting on April 1st. Here's a list of the other prizes for the event:

    GRAND PRIZE
    - 1 iPad (Wi-Fi 16GB)

    THIRD PARTY COMPONENTS
    - 6 Telerik Premium Collection
    - 5 Infragistics NetAdvantage for .NET
    - 1 Nevron Chart for .NET Lite
    - DevExpress
    - Xceed

    PRODUCTIVITY
    - 2 CodeRush with Refactor! Pro
    - 2 ReSharper
    - CodeSmith

    GAMES
    - 3 Halo 3: ODST (Xbox 360)
    - 3 Forza Motorsport (Xbox 360)

    OTHER SOFTWARE
    - 3 Windows 7 Ultimate
    - 2 Microsoft Office Standard 2007

    HARDWARE
    - 2 Microsoft Arc Mouse

    BOOKS
    - 12 O'Reilly eBooks
    - 12 Microsoft Press books
    - 5 Apress books
    - 3 Addison-Wesley books
    - 2 Manning books
    - 2 Sams books

    The Info: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala on April 1, 2010

    PRESENTATION TOPIC
    "Be a Professional Developer and Write Clean Code!" - by Claudio Lassala
    Poorly written code can be created quickly, but it comes at a cost of high maintenance. Most of the time, code can be improved easily by following some simple practices. Professional developers should know these practices and tools and apply them to their work every day. This session will cover the importance of writing clean code, the kind of attitude all developers should have towards the code they produce, as well as the practices and tools that can be used to aid you in becoming a better developer.

    BIOGRAPHY
    Claudio Lassala is a Senior Developer at EPS Software Corp. He has presented several lectures at Microsoft events such as PDC Brazil and various other Microsoft seminars, as well as several conferences and user groups across North America and Brazil. He is a multiple winner of the Microsoft MVP Award since 2001 (for Visual FoxPro in 2001-2002, and for C# ever since), an INETA speaker, and also holds the MCSD for .NET certification. He has articles published in several magazines, such as MSDN Brazil Magazine, CoDe Magazine, UTMag, Developers Magazine, and FoxPro Advisor. More detailed information regarding his presentations and articles can be found in his MVP Profile. You can also read more about Claudio on his blog or on Twitter.

    Schedule
    5:30 PM - 6:30 PM  Social Networking
    6:30 PM - 7:00 PM  Prizes
    7:00 PM - 8:30 PM  Presentation: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala
    8:30 PM - 9:00 PM  Wrap-Up

    Read the article

  • Silverlight Cream for June 17, 2010 -- #885

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Antoni Dol, Jeff Prosise, David Anson, and John Papa.

    Shoutouts:
    Rob Davis has a World Cup Football Stadium tour in Silverlight, Azure, and Bing Maps up: The World Cup Map... cruise around this... tons of features.
    The Silverlight Team Blog reports that NBC Sports is streaming the US Open in Silverlight.
    Adam Kinney announced that the Expression Studio 4 Launch keynote videos are available.

    From SilverlightCream.com:

    Data Driven Applications with MVVM Part III: Validation, Bringing the UI Closer
    Zoltan Arvai's 3rd (and final) part of the Data-Driven MVVM apps series is up at SilverlightShow. In this final section he focuses on validation and discusses closer integration with the view.

    Focus on FocusVisualElement in Silverlight buttons
    Antoni Dol has a cool post up about the FocusVisualElement, and uses a button to demonstrate how it can be used.

    Dynamic XAP Discovery with Silverlight MEF
    Jeff Prosise is discussing Silverlight and MEF... but better than the normal loading of XAP files... he's doing dynamic discovery of XAP files... and makes it look easy!

    Updated analysis of two ways to create a full-size Popup in Silverlight
    David Anson revisits a prior post with an eye toward Silverlight 4. The feature he's discussing is that you can now hook the Resized event without having browser zoom disabled... and he demonstrates its use in the code from the old post.

    Silverlight as a Transmedia Platform (Silverlight TV #33)
    Jesse Liberty joins John Papa this week with Silverlight TV #33, discussing Transmedia and Silverlight as a Transmedia Platform.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • IIS 7 503 error, application pool stop crash, defdoc.dll could not be loaded due to a configuration

    - by optician
    Hi all, I'm currently trying to get IIS 7 to work, but every time I request a page, the application pool goes into stopped status. In the event log this is what comes back: The Module DLL 'C:\Windows\System32\inetsrv\defdoc.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a x86 processor architecture. The data field contains the error number. I've already reinstalled IIS. Any other ideas? I read that someone fixed this by downloading the DLL again, but this seems like an odd solution. Thanks. EDIT: I have now replaced the file with one I downloaded off the internet, and now it says The Module DLL 'C:\Windows\System32\inetsrv\protsup.dll' could not be loaded due to a configuration problem. I hope I don't have to do this for hundreds of these.
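    Both failures name native modules and "a x86 processor architecture", which points at a bitness mismatch between the application pool's worker process and the modules it loads, rather than corrupt DLLs, so replacing files one at a time is unlikely to converge. A minimal sketch of toggling the pool's 32-bit setting with Microsoft.Web.Administration; the pool name is a placeholder, and that this setting is the culprit is an assumption:

    using Microsoft.Web.Administration;

    class FixPoolBitness
    {
        static void Main()
        {
            using (ServerManager server = new ServerManager())
            {
                // Placeholder name - use the pool the failing site runs under.
                ApplicationPool pool = server.ApplicationPools["DefaultAppPool"];

                // When true, the pool spawns 32-bit worker processes and 64-bit
                // native modules fail to load; when false, 32-bit modules fail
                // the same way. Flip it to match the bitness of your modules.
                pool.Enable32BitAppOnWin64 = false;

                server.CommitChanges();
            }
        }
    }

    The same switch is exposed in IIS Manager as "Enable 32-Bit Applications" under the pool's Advanced Settings, which avoids writing any code at all.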

    Read the article

  • Kinect Presentation at Chippewa Valley Code Camp

    - by mbcrump
    On November 12th, 2011, I gave a presentation at Chippewa Valley Code Camp titled "Kinecting the Dots with the Kinect SDK". As promised, here are the slides / code / resources from my talk. (click image to download slides)

    The Kinect for Windows SDK beta is a starter kit for application developers that includes APIs, sample code, and drivers. This SDK enables the academic research and enthusiast communities to create rich experiences by using Microsoft Xbox 360 Kinect sensor technology on computers running Windows 7.

    Resources:
    - Download Kinect for Windows SDK Beta 2 - You can download either a 32 or 64 bit SDK depending on your OS.
    - FAQ for Kinect for Windows SDK Beta 2
    - Kinect for Windows SDK Quickstarts for Windows SDK Beta 2
    - Information on upgrading Kinect Applications to MS SDK Beta 2 - Brand new post by me on how to upgrade Kinect applications to Beta 2.
    - Getting the Most out of the Kinect SDK, by me, for the Microsoft MVP Award Program Blog.
    - My "Busy Developer's Guide to the Kinect SDK" (still references Beta 1, but most information is still valid)

    Helpful toolkits / templates mentioned in the talk:
    - Coding4Fun Kinect Toolkit - Lots of extension methods and controls for WPF and WinForms.
    - KinectContrib - Visual Studio 2010 templates (not updated for Beta 2 as of 11/14/2011).

    Fun projects for learning purposes (all updated to Beta 2):
    - Kinect Mouse Cursor - Use your hands to control things like a mouse. Created by Brian Peek.
    - Kinect Paint - Basically MS Paint, but using your hands!
    - Kinecting the Dots: Adding Buttons to your Kinect Application (not on Beta 2, but check out my guide on how to do this)

    Thanks for attending! I had a really great time at the event and would like to personally thank everyone for coming out to support the local community. Thanks for reading.

    Subscribe to my feed
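    For anyone following the Beta 2 upgrade guide linked above, the headline change is how you acquire the sensor. A minimal initialization sketch, assuming the Beta 2 managed API (Microsoft.Research.Kinect.Nui); treat the exact option flags as illustrative rather than authoritative:

    using Microsoft.Research.Kinect.Nui;

    class KinectBootstrap
    {
        static void Main()
        {
            // Beta 2: sensors are acquired from the Runtime.Kinects collection
            // (Beta 1 code constructed "new Runtime()" directly).
            Runtime nui = Runtime.Kinects[0];

            nui.Initialize(RuntimeOptions.UseColor |
                           RuntimeOptions.UseDepthAndPlayerIndex |
                           RuntimeOptions.UseSkeletalTracking);

            nui.SkeletonFrameReady += (sender, e) =>
            {
                // React to tracked skeletons here (hand positions, gestures, ...).
            };

            // ... run the application, then release the sensor on shutdown:
            nui.Uninitialize();
        }
    }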

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back! In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In this blog post I'll get into the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application.

    Tools => Options => Test Tools
    Have you visited the treasures of the Visual Studio menu bar Tools => Options => Test Tools lately? The options to enable or disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected/unselected from here. Ever wondered how you can change the default limit of 25 test results? This can again be changed from here. If you record a lot of web tests and wish for the web test recorder to start with "that" URL populated, well, this again can be specified from here. If you haven't so far, I would urge you to spend 2 minutes in the test tools options.

    Test Menu => Ready, Steady, Test, Action!
    The test tools are under the Test menu in Visual Studio. Apart from being able to create a new test and test list, you can also load an existing vsmdi file. You can also manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time; again, this selection can be done from here. You can open the various test windows from under the Windows option in the Test menu. If you open the Test View window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right-clicking a test in the test list and choosing Properties from the context menu.

    So, what is a vsmdi file?
    vsmdi stands for Visual Studio Test Metadata File. Placed under Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an XML file you will see a series of test links nested within the test list tags, along with the run configuration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see what tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in team builds.

    Web Performance Test – The Truth!
    In Visual Studio 2010, "Web Tests" have been renamed "Web Performance Tests". Apart from the renaming, there have been several improvements to this test type in Visual Studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum, and a frequent question from many users is "Do web tests support pages that run JavaScript?" I will start with a little bit of background before answering this question. Web performance tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than instantiating a browser. The most common source of confusion is that users do not realize web performance tests work at the HTTP layer. The tool adds to that misconception.
    After all, you record in IE, and when running a web test you can select which browser to use, and then the results viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The web test engine works at the HTTP layer and does not instantiate a browser. What does that mean? In the diagram below, you can see there are no browsers running when the engine is sending and receiving requests.

    Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls. The most common examples of browser plugins are Silverlight or Flash. The web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the 'browser control'. If you want to test the page behaviour that results from the JavaScript or plugin, consider using Coded UI Tests.

    This page looks like it failed, when in fact it succeeded! Looking closely at the response, and subsequent requests, it is clear the operation succeeded. As stated above, the reason the browser control is painting this message is that JavaScript has been disabled in this control.

    So, to reiterate, the web performance test engine:
    - Sends and receives data at the HTTP layer.
    - Does NOT run a browser.
    - Does NOT run JavaScript.
    - Does NOT host ActiveX controls or plugins.

    There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio.

    Demo – Web Performance Test
    [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration
    [Demo] - Visual Studio Ultimate 2010: Web Performance Test

    In this short video I try to answer the following questions:
    - Why is performance testing important?
    - How does Visual Studio help you performance test your applications?
    - How do I record a web performance test?
    - How do I make a web performance test data driven, transaction driven, loop driven, convert it to code, and add validations?
    - Best practices for recording web performance tests.

    I have a web performance test, what next?
    Creating the web performance test was the first step towards load testing your application. Now that we have the base test, we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site? Can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load; you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light loads for a short duration and under heavy load for a long duration, and to soak test the application (run a constant load for a week or two) to record the effects of constant load over really long durations. This is a great way of identifying how your application handles the default IIS application pool recycle, which by default is configured to happen once every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements.
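    One of the questions above, converting a recorded web performance test to code, is worth a tiny illustration. A minimal sketch of a coded web performance test, assuming the Microsoft.VisualStudio.TestTools.WebTesting API; the URLs and query parameter are placeholders, not taken from the demo:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    public class HomeAndSearchWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Requests are issued at the HTTP layer: no browser, no JavaScript.
            WebTestRequest home = new WebTestRequest("http://localhost/MyApp/Default.aspx");
            home.ExpectedHttpStatusCode = 200;
            yield return home;

            // A second request with a query string parameter, as the recorder
            // would generate it (name, value, urlEncode, isDynamic).
            WebTestRequest search = new WebTestRequest("http://localhost/MyApp/Search.aspx");
            search.QueryStringParameters.Add("q", "performance", false, false);
            yield return search;
        }
    }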
    BUT there are some best practices! => Goal Based Load Testing Approach
    Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are. There is no point performing a load test of 5,000 users if your intranet application will only have 100 simultaneous users; it is important to keep focused on the real goals of the project. So the idea is to have a user story around your load testing scenarios and test realistically. It is recommended that you follow the outline below. It is an iterative process: refine your objectives, identify the key scenarios, determine the expected workload and the key metrics you want to report, record the web performance tests, simulate load, and analyse the results.

    Is your application already deployed in production? This is great! You can analyse the IIS logs to understand user behaviour... But what are IIS logs? The IIS logs allow you to record events for each application and web site on the web server. You can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can use the IIS logs to identify any attempts to gain unauthorized access to your web server.

    How to configure IIS logs? For those ninjas who already have IIS logging configured (by the way, it's on by default) and need a way to analyse the logs, there is the Windows IIS utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others; it allows you to export the results of queries to many output formats such as CSV, XML, SQL Server, charts and others; and it works well with IIS 5, 6, 7 and 7.5. See the frequently used Log Parser queries.

    Demo – Load Test
    [Demo] - Visual Studio Ultimate 2010: Load Testing

    In this short video I try to answer the following questions:
    - What are the types of performance testing?
    - How do I perform goal-driven load testing, analyse a test run result, and generate a report?

    Recap
    A quick recap of what we have covered so far. Thank you for taking the time out to read this blog post. In part III of this blog series I'll be getting into the details of test result analysis, test result drill-through, test report generation, test run comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc., please leave a comment. See you in Part III!

    Share this post: CodeProject

    Read the article

  • Sonicwall with failover, multiple subnets, and preferred WAN interface per subnet

    - by Dan
    I am trying to set up my Sonicwall TZ-210 as follows:

    1. Two WAN interfaces (different ISPs), set up in failover mode.
    2. Two LAN interfaces with different subnets.
    3. Each LAN subnet will have a preferred outbound WAN interface, but would fail over when necessary.

    In this way, each ISP is being used for a separate subnet of my network, but both subnets could fail over to the other ISP if their primary went down. I know how to do 1 and 2, but I don't know how to do 3. I could set up a route for each subnet to go through a specific interface, but what would happen in the event of a failover? Would it automatically update those routes? Thanks!

    Read the article

  • Node.js MMO - process and/or map division

    - by Gipsy King
    I am in the design phase of an MMO browser-based game (certainly not massive, but all connected players are in the same universe), and I am struggling to find a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice since I am also concerned with different processes.

    Solution 1: Tie a process to a map location (like a map cell), and connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically).
    Pros:
    - Easier to implement
    Cons:
    - Must divide the map into zones
    - Player reconnection when moving into a different zone is probably annoying
    - If one zone/process is always busy (has players in it), it doesn't really load-balance, unless I split the zone, which may not always be viable
    - There shouldn't be any visible borders

    Solution 1b: Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact.

    Solution 2: Spawn processes on demand, unrelated to a location. Have one special process to keep track of all connected player handles, their location, and the process they're connected to. Then when a player performs an action, the process finds all other nearby players (from the special player-process-location tracking node), and instructs their matching processes to relay the action.
    Pros:
    - Easy load balancing: spawn more processes
    - Avoids player reconnecting / borders between zones
    Cons:
    - Harder to implement and test
    - Additional steps of finding players, and relaying the event/action to another process
    - If the player-location-process tracking process fails, all the others fail too

    I would like to hear if I'm missing something, or if I'm completely off track.
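    The heart of solution 2 is the tracking process. As a language-agnostic sketch of that registry (rendered in C# purely for illustration; every name here is an assumption, not from the question), its job reduces to: index players by position, and group the nearby ones by hosting process so an action is relayed once per process rather than once per player:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class PlayerTracker
    {
        class Entry { public string ProcessId; public double X, Y; }

        readonly Dictionary<string, Entry> players = new Dictionary<string, Entry>();

        // Called whenever a player connects, moves, or is handed to another process.
        public void Update(string playerId, string processId, double x, double y)
        {
            players[playerId] = new Entry { ProcessId = processId, X = x, Y = y };
        }

        // Group the players near `actorId` by hosting process, so the action
        // can be relayed once per process instead of once per player.
        public Dictionary<string, List<string>> NearbyByProcess(string actorId, double radius)
        {
            Entry actor = players[actorId];
            return players
                .Where(p => p.Key != actorId)
                .Where(p => Math.Pow(p.Value.X - actor.X, 2) +
                            Math.Pow(p.Value.Y - actor.Y, 2) <= radius * radius)
                .GroupBy(p => p.Value.ProcessId)
                .ToDictionary(g => g.Key, g => g.Select(p => p.Key).ToList());
        }
    }

    The single point of failure the question worries about remains: if this registry dies, relaying stops, so in practice it would need replication or a rebuild-from-heartbeats recovery path.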

    Read the article

  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement: a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers and devices on a network with low bandwidth, but HTML fails much more gracefully than JavaScript (i.e., markup that is not supported is simply ignored, whereas if a browser throws an exception while executing your script, you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single page web apps built with frameworks like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML and into something like a JSON API. I cannot seem to reconcile these two design patterns: progressive enhancement vs. single page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and am I missing something here in my mental model?

    Read the article

  • SQL Server Restore from Backup, Just primary File Group

    - by bladefist
    Thankfully, this question is just a what-if; I am not in an emergency right now. I have created a filegroup in my database (SQL Server 2008) and moved some massive data tables over to it, leaving my website's central tables in the PRIMARY filegroup. In the event of a restore, can I restore just the primary filegroup and have a working database? Or do I have to restore both filegroups? I don't want my site down for ages while it restores the second filegroup.

    Read the article

  • How to protect my VPS from winlogon RDP spam requests

    - by Valentin Kuzub
    I have hackers constantly hitting my RDP and generating thousands of audit failures in the event log. The password is pretty elaborate, so I don't think brute-forcing will get them anywhere. I am using a VPS and I am pretty much a noob in Windows Server security (I am a programmer myself and it's the web server for my site). What is a recommended approach to deal with this? I would rather block IPs after some number of failures, for example. Sorry if the question is not appropriate.
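    In the spirit of the "block IPs after some number of failures" idea, here is a hedged sketch: watch the Security log for failed logons (event ID 4625) and add a Windows Firewall block rule via netsh once an address crosses a threshold. The property index used to pull the source address out of the event record is an assumption and differs across Windows versions; packaged fail2ban-style tools, moving RDP off the default port, or putting RDP behind a VPN or RD Gateway are sturdier options.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Diagnostics.Eventing.Reader;

    class RdpGuard
    {
        static readonly Dictionary<string, int> failures = new Dictionary<string, int>();
        const int Threshold = 5;

        static void Main()
        {
            // Subscribe to failed-logon events (4625) in the Security log.
            var query = new EventLogQuery("Security", PathType.LogName,
                "*[System[(EventID=4625)]]");
            using (var watcher = new EventLogWatcher(query))
            {
                watcher.EventRecordWritten += OnFailedLogon;
                watcher.Enabled = true;
                Console.ReadLine(); // keep the process alive
            }
        }

        static void OnFailedLogon(object sender, EventRecordWrittenEventArgs e)
        {
            if (e.EventRecord == null) return;

            // Assumed index of the source IP in the 4625 event's properties.
            string ip = e.EventRecord.Properties[19].Value as string;
            if (string.IsNullOrEmpty(ip) || ip == "-") return;

            int n;
            failures.TryGetValue(ip, out n);
            failures[ip] = n + 1;

            if (failures[ip] == Threshold)
            {
                // Block the offending address with a Windows Firewall rule.
                Process.Start("netsh", "advfirewall firewall add rule " +
                    "name=\"Block RDP " + ip + "\" dir=in action=block remoteip=" + ip);
            }
        }
    }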

    Read the article

  • Twitter API Voting System

    - by Richard Jones
    So I blatantly got this idea from the MIX 10 event. At MIX they held a rock band talent competition type thing (I'm not quite sure of all the details). But the interesting part for me is how they collected votes. They used Twitter (what else, when you have a few thousand geeks available to you). The basic idea was that you tweeted your vote with a # tag, i.e.:

    #ROCKBANDVOTE vote Richard

    How cool... So the question is: how do you write something to collate and count all the votes? Time to press the magic Visual Studio New Project button... Twitter has a really nice API that can be invoked from .NET. This is the snippet of code that will search for any given phrase, i.e. #ROCKBANDVOTE:

    public static XmlDocument GetSearchResults(string searchfor)
    {
        return GetSearchResults(searchfor, "");
    }

    public static XmlDocument GetSearchResults(string searchfor, string sinceid)
    {
        XmlDocument retdoc = new XmlDocument();

        try
        {
            string url = "http://search.twitter.com/search.atom?&q=" + searchfor;
            if (sinceid.Length > 0)
                url += "&since_id=" + sinceid; // "&" separator added to keep the query string valid
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            WebResponse res = request.GetResponse();
            retdoc.Load(res.GetResponseStream());
            res.Close();
        }
        catch
        {
            // swallow errors and return an empty document
        }
        return retdoc;
    }

    I've got two overloads that optionally let you pass in the last ID to look for, as well as what you want to search for. Note that Twitter rate-limits the amount of requests you can send; see http://apiwiki.twitter.com/Rate-limiting. So realistically I wanted my app to run every hour or so and only pull out results that haven't been received before (hence the overload to pass in the sinceid parameter). I'll post the code that parses the returned XML when it's finished.
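    Since the post stops short of the parsing code, here is a hedged sketch of what the collating/counting step might look like, fed by the XmlDocument that GetSearchResults returns. The "#ROCKBANDVOTE vote Richard" shape follows the post's example; the method and variable names are mine:

    using System;
    using System.Collections.Generic;
    using System.Xml;

    public static Dictionary<string, int> CountVotes(XmlDocument results)
    {
        Dictionary<string, int> votes =
            new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);

        XmlNamespaceManager ns = new XmlNamespaceManager(results.NameTable);
        ns.AddNamespace("atom", "http://www.w3.org/2005/Atom");

        // Each search hit is an <entry>; the tweet text lives in its <title>.
        foreach (XmlNode title in results.SelectNodes("//atom:entry/atom:title", ns))
        {
            string[] parts = title.InnerText.Split(' ');
            int i = Array.FindIndex(parts,
                p => p.Equals("vote", StringComparison.OrdinalIgnoreCase));

            // Tally the word that follows "vote", e.g. "Richard".
            if (i >= 0 && i + 1 < parts.Length)
            {
                string candidate = parts[i + 1];
                int n;
                votes.TryGetValue(candidate, out n);
                votes[candidate] = n + 1;
            }
        }
        return votes;
    }

    Remember the sinceid overload when calling GetSearchResults on a schedule, so each run only tallies tweets it hasn't already counted.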

    Read the article

  • Can't log in using sa account for sql server 2008

    - by tessa
    I installed SQL Server 2008. During the install I set it to mixed mode authentication and set the password for what I assume is the sa account. In Configuration Manager I set TCP/IP and Named Pipes to enabled. When I open SQL Server Management Studio and try to log in - username: sa, password: whatIjustsetintheinstall - it fails with the error: Login failed for user sa. (error 18456). The error in Event Viewer is: Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]. I know the password is right because I just set it. What am I doing wrong here? Is sa not the right user to be logging in with in mixed mode? I've been reading through forum after forum but just cannot find anything that works.

    Read the article

  • Frank Buytendijk on Prahalad, Business Best Practices

    - by Bob Rhubart
    In his video on the questionable value of some business best practices, Frank Buytendijk mentions a recent HBR article by business guru C.K. Prahalad. I just learned that Prahalad passed away this past weekend at the age of 68. (InformationWeek obit) A couple of years ago I had the good fortune to attend Mr. Prahalad's keynote address at a Gartner event. He had an audience of software architects absolutely mesmerized as he discussed technology's role in the changing nature of business competition. The often dysfunctional relationship between IT and business has been, and will probably always be, a hot-button issue. But during Prahalad's keynote there was a palpable sense that the largely technical audience was having some kind of breakthrough, that they had achieved a new level of understanding about the importance of the relationship between the two camps. Fortunately, Prahalad leaves behind a significant body of work that will remain a valuable resource as business, and the technology that supports it, continues to evolve. Technorati Tags: business best practices, enterprise architecture, prahalad, oracle del.icio.us Tags: business best practices, enterprise architecture, prahalad, oracle

    Read the article

  • Looking for application performance tracking software

    - by JavaRocky
    I have multiple Java-based applications which produce statistics on how long method calls take. Right now the information is being written into a log file and I analyse performance that way. However, with multiple apps and more monitoring requirements this is becoming a bit overwhelming. I am looking for an application which will collect stats and graph them so I can analyse performance and be aware of performance degradation. I have looked at SolarWinds Application Performance Monitoring; however, it polls periodically to gather information. My applications are totally event-based and we would like to graph and track this accordingly. I almost started hacking together some scripts to produce Google Charts, but surely there are applications which do this already. Suggestions?

    Read the article

  • "Has Oracle written the script for CRM success?" - Anthony Lye on Customer Experience at BAFTA

    - by Richard Lefebvre
    Anthony Lye showcased Oracle Fusion CRM at a BAFTA gathering, and MyCustomer.com covered the story under the title "Has Oracle written the script for CRM success?" According to MyCustomer.com, "Oracle's SVP of CRM Anthony Lye set the scene for the event, suggesting products are becoming commoditized, so that the only way to differentiate is through the relationship with the customer. But he warned that 'customers are more and more in control of that relationship, so you have to provide great experiences for them.'" "The quickest win within your organization to create a single view is to connect your marketing organization with your selling organization; align goals, processes, people and technology," Anthony explained. "And this is a transition that is already happening - 'VPs of marketing have started turning up in the same meetings as VPs of sales; we have started to see that they want to work together' - but this convergence needs nurturing." "In Fusion there are capabilities to align the organisation - we enable marketing on the same platform to build campaigns connected to sales stages. It can affect leads and opportunities at the top end of the funnel. And the selling organisation can take advantage of marketing content - the materials that were exclusively within marketing can now be used by sales. Your sales teams have been campaigning forever, but it's usually by email; it isn't aligned with the corporate message and it's being sent to people it shouldn't be. By aligning them we can increase output and the quality of that output." Anthony concluded: "Operating in a disconnected fashion with two distinct systems will cost you time and money. So we feel there's a material advantage in a solution like this." Enjoy the full story at http://www.mycustomer.com/topic/marketing/has-oracle-written-script-crm-success/139958

    Read the article

  • Validating Petabytes of Data with Regularity and Thoroughness

    - by rickramsey
    by Brian Zents

    When former Intel CEO Andy Grove said "only the paranoid survive," he wasn't necessarily talking about tape storage administrators, but it's a lesson they've learned well. After all, tape storage is the last line of defense to prevent data loss, so tape administrators are extra cautious in making sure their data is secure. Not surprisingly, we are often asked for ways to validate tape media and the files on them. In the past, an administrator could validate the media, but doing so was often tedious or disruptive or both. The debut of the Data Integrity Validation (DIV) and Library Media Validation (LMV) features in the Oracle T10000C drive helped eliminate many of these pains. Also available with the Oracle T10000D drive, these features use hardware-assisted CRC checks that not only ensure the data is written correctly the first time, but also do so much more efficiently. Traditionally, a CRC check takes at least 25 seconds per 4GB file with a 2:1 compression ratio, but the T10000C/D drives can reduce the check to a maximum of nine seconds because the entire check is contained within the drive. No data needs to be sent to a host application. A time savings of at least 64 percent is extremely beneficial over the course of checking an entire 8.5TB T10000D tape.

    While the DIV and LMV features are better than anything else out there, what storage administrators really need is a way to check petabytes of data with regularity and thoroughness. With the launch of Oracle StorageTek Tape Analytics (STA) 2.0 in April, there is finally a solution that addresses this longstanding need. STA bundles these features into one interface to automate all media validation activities across all Oracle SL3000 and SL8500 tape libraries in an environment. And best of all, the validation process can be associated with the health checks an administrator would be doing already through STA. In fact, STA validates the media based on any of the following policies:

    - Random Selection - Randomly selects media for validation whenever a validation drive in the standalone library or library complex is available.
    - Media Health = Action - Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of "Action." You can specify from one to five exchanges.
    - Media Health = Evaluate - Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of "Evaluate." You can specify from one to five exchanges.
    - Media Health = Monitor - Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of "Monitor." You can specify from one to five exchanges.
    - Extended Period of Non-Use - Selects media that have not had an exchange for a specified number of days. You can specify from 365 to 1,095 days (one to three years).
    - Newly Entered - Selects media that have recently been entered into the library.
    - Bad MIR Detected - Selects media with an exchange resulting in a "Bad MIR Detected" error. A bad media information record (MIR) indicates degraded high-speed access on the media.

    To avoid disrupting host operations, an administrator designates certain drives for media validation operations. If a host requests a file from media currently being validated, the host's request takes priority. To ensure that the administrator really knows it is the media that is bad, as opposed to the drive, STA includes drive calibration and qualification features. In addition, validation requests can be re-prioritized or cancelled as needed. To ensure that a specific tape isn't validated too often, STA prevents a tape from being validated twice within 24 hours via one of the policies described above. A tape can be validated more often if the administrator manually initiates the validation.

    When the validations are complete, STA reports the results. STA does not report simply a "good" or "bad" status. It also reports if media is degraded, so the administrator can migrate the data before there is a true failure. From that point, the administrators' paranoia is relieved, as they have the necessary information to make a sound decision about the health of the tapes in their environment.

    About the Photograph
    Photograph taken by Rick Ramsey in Death Valley, California, May 2014

    - Brian

    Follow OTN Garage on: Web | Facebook | Twitter | YouTube

    Read the article

  • Code is not the best way to draw

    - by Bertrand Le Roy
    It should be quite obvious: drawing requires constant visual feedback. Why is it then that we still draw with code in so many situations? Of course it's because the low-level APIs always come first, and design tools are built after and on top of those. Existing design tools also don't typically include complex UI elements such as buttons. When we launched our Touch Display module for Netduino Go!, we naturally built APIs that made it easy to draw on the screen from code, but very soon, we felt the limitations and tedium of drawing in code. In particular, any modification requires a modification of the code, followed by compilation and deployment. When trying to set up buttons at pixel precision, the process is not optimal. On the other hand, code is irreplaceable as a way to automate repetitive tasks. While tools like Illustrator have ways to repeat graphical elements, they do so in a way that is a little alien and counter-intuitive to my developer mind. From these reflections, I knew that I wanted a design tool that would be structurally code-centric but that would still enable immediate feedback and mouse adjustments. While thinking about the best way to achieve this goal, I saw this fantastic video by Bret Victor. The key to the magic in all these demos is permanent execution of the code being edited. Whenever a parameter is modified, everything is re-executed immediately so that the impact of the modification is instantaneously visible. If you do this all the time, the code and the result of its execution fuse in the mind of the user into dual representations of a single object. All mental barriers disappear. It's like magic. The tool I built, Nutshell, is just another implementation of this principle. It manipulates a list of graphical operations on the screen. Each operation has a nice editor, and translates into a bit of code. Any modification to the parameters of the operation will modify the bit of generated code and trigger a re-execution of the whole program. This happens so fast that it feels like the drawing reacts instantaneously to all changes. The order of the operations is also the order in which the code gets executed. So if you want to bring objects to the front, move them down in the list, and up if you want to move them to the back. But where it gets really fun is when you start applying code constructs such as loops to the design tool. The elements that you put inside of a loop can use the loop counter in expressions, enabling crazy scenarios while retaining the real-time editing features. When you're done building, you can just deploy the code to the device and see it run in its native environment. This works thanks to two code generators. The first code generator builds the JavaScript that is executed in the browser to build the canvas view in the web page hosting the tool. The second code generator builds the C# code that will run on the Netduino Go! microcontroller and drive the display module. The possibilities are fascinating, even if you don't care about driving small touch screens from microcontrollers: it is now possible, within a reasonable budget, to build specialized design tools for very vertical applications. Direct feedback is a powerful ally in many domains. Code generation driven by visual designers has become more approachable than ever thanks to extraordinary JavaScript libraries and to the powerful development platform that modern browsers provide.
I encourage you to tinker with Nutshell and let it open your eyes to new possibilities that you may not have considered before. It’s open source. And of course, my company, Nwazet, can help you develop your own custom browser-based direct feedback design tools. This is real visual programming…

    Read the article

  • Exception Handling Frequency/Log Detail

    - by Cyborgx37
    I am working on a fairly complex .NET application that interacts with another application. Many single-line statements are possible culprits for throwing an exception, and there is often nothing I can do to check the state before executing them to prevent these exceptions. The question is, based on best practices and seasoned experience, how frequently should I lace my code with try/catch blocks? I've listed three examples below, but I'm open to any advice. I'm really hoping to get some pros/cons of various approaches. I can certainly come up with some of my own (greater log granularity for the O-C approach, better performance for the monolithic approach), so I'm looking for experience over opinion.

    EDIT: I should add that this application is a batch program. The only "recovery" necessary in most cases is to log the error, clean up gracefully, and quit. So this could be seen as much as a question of log granularity as of exception handling. In my mind's eye I can imagine good reasons for both, so I'm looking for some general advice to help me find an appropriate balance.

    Monolithic Approach

    class Program {
        public static void Main() {
            try {
                Step1();
                Step2();
                Step3();
            } catch (Exception e) {
                Log(e);
            } finally {
                CleanUp();
            }
        }

        public static void Step1() {
            ExternalApp.Dangerous1();
            ExternalApp.Dangerous2();
        }

        public static void Step2() {
            ExternalApp.Dangerous3();
            ExternalApp.Dangerous4();
        }

        public static void Step3() {
            ExternalApp.Dangerous5();
            ExternalApp.Dangerous6();
        }
    }

    Delegated Approach

    class Program {
        public static void Main() {
            try {
                Step1();
                Step2();
                Step3();
            } finally {
                CleanUp();
            }
        }

        public static void Step1() {
            try {
                ExternalApp.Dangerous1();
                ExternalApp.Dangerous2();
            } catch (Exception e) {
                Log(e);
                throw;
            }
        }

        public static void Step2() {
            try {
                ExternalApp.Dangerous3();
                ExternalApp.Dangerous4();
            } catch (Exception e) {
                Log(e);
                throw;
            }
        }

        public static void Step3() {
            try {
                ExternalApp.Dangerous5();
                ExternalApp.Dangerous6();
            } catch (Exception e) {
                Log(e);
                throw;
            }
        }
    }

    Obsessive-Compulsive Approach

    class Program {
        public static void Main() {
            try {
                Step1();
                Step2();
                Step3();
            } finally {
                CleanUp();
            }
        }

        public static void Step1() {
            try { ExternalApp.Dangerous1(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous2(); } catch (Exception e) { Log(e); throw; }
        }

        public static void Step2() {
            try { ExternalApp.Dangerous3(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous4(); } catch (Exception e) { Log(e); throw; }
        }

        public static void Step3() {
            try { ExternalApp.Dangerous5(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous6(); } catch (Exception e) { Log(e); throw; }
        }
    }

    Other approaches welcomed and encouraged. The above are examples only.
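    One more option worth weighing, offered as a sketch rather than a recommendation: keep the obsessive-compulsive approach's per-call log granularity but factor the try/catch scaffolding into a helper, so each dangerous statement costs one line instead of eight. Guarded is my name for it, and it assumes the same Log helper used in the examples (Action requires using System):

    // Wraps a single dangerous call: logs at per-call granularity, then
    // rethrows so Main's finally block still runs CleanUp().
    public static void Guarded(Action step) {
        try {
            step();
        } catch (Exception e) {
            Log(e);
            throw;
        }
    }

    public static void Step1() {
        Guarded(ExternalApp.Dangerous1);
        Guarded(ExternalApp.Dangerous2);
    }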

    Read the article

  • Algorithm for querying linearly through a non-linear list of questions

    - by JoshLeaves
    For a multiplayer trivia game, I need to supply my users with a new quiz in a desired subject (Science, Maths, Litt. and such) at the start of every game. I've generated about 5K quizzes for each subject and filled my database with them. So my 'Quizzes' database looks like this:

    |ID   |Subject     |Question
    +-----+------------+----------------------------------
    |  23 |Science     | What's water?
    |  42 |Maths       | What's 2+2?
    |  99 |Litt.       | Who wrote "Pride and Prejudice"?
    | 123 |Litt.       | Who wrote "On The Road"?
    | 146 |Maths       | What's 2*2?
    | 599 |Science     | You know what's cool?
    |1042 |Maths       | What's the Fibonacci Sequence?
    |1056 |Maths       | What's 42?

    And so on... (Much more detailed/complex, but I'll keep the example simple.)

    As you can see, due to technical constraints (MongoDB), my IDs are not linear, but I can use them as an increasing sequence. So far, my algorithm to ensure two users get a new quiz when they play together is the following:

    // Take the last played quizzes by P1 and P2
    var q_one = player_one.getLastPlayedQuizz('Maths');
    var q_two = player_two.getLastPlayedQuizz('Maths');

    // If both of them never played in the subject, return first quizz in the list
    if ((q_one == NULL) && (q_two == NULL))
        return QuizzDB.findOne({subject: 'Maths'});

    // If one of them never played, play the next quizz for the other player.
    // This quizz is found by asking for the first quizz in the desired subject
    // where the ID is greater than the last played quizz's ID (if the last
    // played quizz ID is 42, this will return 146 following the above example
    // database)
    if (q_one == NULL)
        return QuizzDB.findOne({subject: 'Maths', ID > q_two});
    if (q_two == NULL)
        return QuizzDB.findOne({subject: 'Maths', ID > q_one});

    // And if both of them have a lastPlayedQuizz, we return the next quizz for
    // the player whose lastPlayedQuizz got the higher ID
    if (q_one > q_two)
        return QuizzDB.findOne({subject: 'Maths', ID > q_one});
    else
        return QuizzDB.findOne({subject: 'Maths', ID > q_two});

    Now here comes the real problem: once I get to the end of my database (let's say P1's last played quiz in 'Maths' is 1056, P2's is 146 and P3's is 1042), following my algorithm, P1's ID is the highest, so I ask for the next question in 'Maths' where the ID is greater than 1056. There is nothing, so I roll back to the beginning of my quiz list (with a random skipper to avoid having the first question always show up). P1's and P2's last played will then be 42 and they will start fresh from the beginning of the list. However, if P1 (42) plays against P3 (1042), the resulting ID will be 1056... which P1 already played two games ago. Basically, players who just "rolled back" to the beginning of the list will be brought back to the end of the list by players who still haven't rolled back. The rollback WILL happen in the end, but it'll take time and there'll be a "bottleneck" at the beginning and at the end.

    Thus my question: what would be the best algorithm to avoid this bottleneck and ensure players don't get stuck endlessly on the same quizzes? Also bear in mind that I've got some technical constraints:

    - I can't get a random question in a subject (i.e. no "QuizzDB.findOne({subject: 'Maths'}).skip(random());"). It's cool to skip one to twenty records, but the MongoDB documentation warns against skipping too many documents.
    - I would like to avoid building an array of every quiz played by each player and finding the next non-played one in the database with a $nin.

    Thanks for your help

    Read the article

  • What I've Gained from being a Presenter at Tech Events

    - by MOSSLover
    Originally posted on: http://geekswithblogs.net/MOSSLover/archive/2014/06/12/what-ive-gained-from-being-a-presenter-at-tech-events.aspx

    I know I've been failing at blogging lately. As I've said before, life happens and it gets in the way of best-laid plans. I thought about creating some type of watch with some app that basically dictates using Dragon NaturallySpeaking to WordPress, but alas, no time, and the processing capabilities just don't exist yet. So, to get to my point: Alison Gianotto created this blog post: http://www.snipe.net/2014/06/why-you-should-stop-stalling-and-start-presenting/. I like the message she has stated in this post and I want to share something personal.

    It was around 2007 that I was seriously looking into leaving the technology field altogether and going back to school. I was calling places like Washington University in Saint Louis and the University of Missouri Kansas City, asking them how I could go about getting into some type of post-graduate medical school program. My entire high school career was based on Medical Explorers and somehow becoming a doctor. I did not want to take my hobby and continue using it as a career mechanism. I was unhappy, but I didn't realize why I was unhappy at the time. It was really a lot of bad things involving the lack of self-confidence and self-esteem. Overall I was not in a good place, and it took me until 2011 to realize that I still was not in a good place in life.

    So in about April 2007 or so I started this blog that you guys have been reading, or occasionally read. I kind of started passively stalking people by reading their blogs in the SharePoint and .NET communities. I also started listening to .NET Rocks and watching videos on their corresponding training for SharePoint, WCF, WF, and a bunch of other technologies. I wanted more knowledge, so someone suggested I go to a user group. I've told this story before about how I met Jeff Julian and John Alexander, so at that point I will spare you the details. You know how I got to my first user group presentation and how I started getting involved with events, so I'll also spare those details.

    The point I want to touch on is that I went out and started speaking, and that path helped me gain the self-confidence and self-respect I needed. When I first moved to NYC I couldn't even ride a subway by myself or walk alone without getting lost. Now I feel like I can go out and solve any problem someone throws at me. So you see, what Alison states in her blog post is true, and I am a great example of that point. I stood in front of 800 or so people at SharePoint Conference in 2011 and spoke about a topic. In 2007 I would have hidden or stuttered the entire time. I have now spoken at over 70 events and user groups. I am a top 25 influencer in my technology. I was a Microsoft SharePoint Most Valuable Professional for years in a row. People are constantly trying to get my time, so that they can pick my brain for solutions and other life problems. I went from maybe five or six friends to hundreds of friends in various cities across the globe. I'm not saying it's instant fame or that it doesn't take a ton of work, but I have never once looked back at my life and regretted the choice I made in 2007. It has led me to a lot of other things in my life, including more positivity and happiness. If anyone ever wants to contact me and pick my brain on a presentation, go ahead.
    If you want me to help you find the best meetup that suits you for that presentation, I can try to help too (I might be a little more helpful in the Microsoft or iOS arenas, though). The best thing I can say is: don't be scared, just do it. If you need an audience I can try to pencil it into my schedule. I can't promise anything, but if you are in NYC I can at least try.

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in 2 years, and it's jam-packed with new modules and features. In terms of brand new products, there are three:

    1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position- or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control.

    2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process, from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. It has a unified close calendar that looks like a regular calendar and gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards to allow you to track the close progress and completion of tasks, as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility that you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product was that it provides a central place for all participants to communicate their progress. When I worked as an accountant, we used ad hoc tools such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows, maybe accountants won't have to revolve their lives around the month-end close anymore with a tool like this. Those periodic fire drills can become predictable, well-managed processes.

    3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution to meet statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel, and XBRL instance documents can then be produced with embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time.

    In terms of other enhancements, there is ERP Integrator, which provides integration and drill-downs from Hyperion products to source systems such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning, making it much easier for end users to submit plans and budgets via the web or via MS Excel. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific to efficiently streamline the budgeting process.

    Here's the press release from April 7, 2010. Here's the pre-recorded webcast where you can see the demos. Just register and watch the hour-long presentation. And finally, here's the newsletter.

    Read the article

  • UPK Professional Customer Success Story: Medtronic

    - by [email protected]
    In case you missed the live event, be sure to listen to last week's UPK Customer iSeminar featuring Medtronic. This was the first iSeminar in our quarterly series to showcase UPK Professional (UPK and Knowledge Pathways). Donna Miller and Staci Gilbert gave viewers an inside look at samples of Medtronic's content as they shared their experiences, methodology and best practices for use of the solution. Here are some highlights of the call:

    - Medtronic initially purchased UPK Professional to support a multi-year, global SAP rollout for 9,000 end users located in 24 countries.
    - As time went on, they expanded their use of UPK Professional to include several of their other enterprise applications: PeopleSoft, Siebel CRM, Hyperion Financial Management, a number of SAP bolt-ons, Documentum, TrackWise, and many others.
    - In combination with their Saba LMS, UPK Professional has allowed Medtronic to create, deploy, track and certify consistent end user training for critical transactions and processes across their organization worldwide - essential for a company in a heavily regulated industry.
    - For key pieces of content or certain end user populations, some Medtronic business units localize/translate the global UPK content. Staci demonstrated examples of their SAP content which has been translated into Japanese.
    - In the live SAP environment, end users rely on UPK's context-sensitive in-application performance support. Medtronic has found this to be very helpful post go-live, giving just-in-time support so end users are confident in a new system or when performing tasks they don't often touch (at quarter or year end). UPK also serves as Medtronic's internal Google.
    - Medtronic has realized savings on many fronts: reduction in support calls due to in-application performance support, elimination of their training clients, and speedier training (1.5 days rather than 5-7 days) of temporary workers by moving from ILT to a blended solution that includes UPK simulations for eLearning.

    Thanks again to Donna and Staci for an exceptional presentation. They offered so many great examples for anyone who's looking for ways to get more out of UPK or interested in learning about UPK Professional: Knowledge Pathways.

    - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article
