Search Results

Search found 10838 results on 434 pages for 'adf task flow'.

  • Mixing Forms and Token Authentication in a single ASP.NET Application

    - by Your DisplayName here!
    I recently had the task of finding out how to mix ASP.NET Forms Authentication with WIF’s WS-Federation. The FormsAuth app already existed, and a new sub-directory of this application should use ADFS for authentication. Minimal changes to the existing application code would be a plus ;) Since the application is using ASP.NET MVC this was quite easy to accomplish – WebForms would be a little harder, but still doable. I will discuss the MVC solution here.

    To solve this problem, I made the following changes to the standard MVC internet application template:

    - Added WIF’s WSFederationAuthenticationModule and SessionAuthenticationModule to the modules section.
    - Added a WIF configuration section to configure the trust with ADFS.
    - Added a new authorization attribute. This attribute goes on controllers that demand ADFS (or STS in general) authentication.

    The attribute logic is quite simple – it checks for authenticated users, and additionally that the authentication type is set to Federation. If that’s the case all is good; if not, the redirect to the STS is triggered.

    ```csharp
    public class RequireTokenAuthenticationAttribute : AuthorizeAttribute
    {
        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            if (httpContext.User.Identity.IsAuthenticated &&
                httpContext.User.Identity.AuthenticationType.Equals(
                    WIF.AuthenticationTypes.Federation, StringComparison.OrdinalIgnoreCase))
            {
                return true;
            }

            return false;
        }

        protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
        {
            // do the redirect to the STS
            var message = FederatedAuthentication.WSFederationAuthenticationModule.CreateSignInRequest(
                "passive", filterContext.HttpContext.Request.RawUrl, false);

            filterContext.Result = new RedirectResult(message.RequestUrl);
        }
    }
    ```

    That’s it ;) If you want to know why this works (and a possible gotcha) – read my next post.
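    For reference, the module registration from the first bullet would look roughly like this in web.config (a sketch for the IIS integrated pipeline; the exact type and version strings depend on the WIF build installed, so treat these values as illustrative):

    ```xml
    <system.webServer>
      <modules>
        <!-- WIF modules; the Version/PublicKeyToken attributes shown are illustrative -->
        <add name="WSFederationAuthenticationModule"
             type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             preCondition="managedHandler" />
        <add name="SessionAuthenticationModule"
             type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             preCondition="managedHandler" />
      </modules>
    </system.webServer>
    ```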

  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interacted with the internet. Nearly-impossible-to-remember numeric addresses became easy-to-remember names.

    Without DNS you’d be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That’s assuming, of course, that a numbers-based web ever gained enough traction to be popular enough to spawn a search giant like Google. How did this shift occur and what did we have before DNS? From Wikipedia:

    The practice of using a name as a simpler, more memorable abstraction of a host’s numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to “localhost”. Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods.

    The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols.

    Over the years it has been refined, but the core of the system is essentially the same. When you type “google.com” into your web browser, a DNS server is used to resolve that host name to the IP address of 209.85.148.105 – making the web human-friendly in the process.

    Domain Name System History [Wikipedia via Wired]

  • Gauging Maturity of your BPM Strategy - part 1 / 2

    - by Sanjeev Sharma
    In this post I will discuss the essence of maturity assessment and the business imperative for doing the same in the context of BPM.

    Social psychology purports that an individual progresses from being a beginner to an expert in a given activity or task along four stages of self-awareness:

    - Unconscious Incompetence, where the individual does not understand or know how to do something, does not necessarily recognize the deficit, and may even deny the usefulness of the skill.
    - Conscious Incompetence, where the individual recognizes the deficit, as well as the value of a new skill in addressing it.
    - Conscious Competence, where the individual understands or knows how to do something, but demonstrating the skill requires explicit concentration.
    - Unconscious Competence, where the individual has had so much practice with a skill that it has become "second nature" and serves as a basis for developing other complementary skills.

    We can extend the above thinking to an organization as a whole by measuring an organization’s level of competence in a specific area or capability as an aggregate of the competence levels of the individuals it is comprised of. After all, organizations, like individuals, evolve through experience, developing “memory” and capabilities that are shaped through a constant cycle of learning, un-learning and re-learning. Hence the key to organizational success lies in developing these capabilities to enable execution of its strategy in line with the external environment, i.e. demand, competition, economy etc.

    However, developing a capability merits establishing a baseline in order to:

    - Assess the magnitude of improvement from past investments
    - Identify gaps and shortcomings
    - Prioritize future investments in the right areas

    A maturity assessment is essentially an organizational self-awareness check aimed at depicting the “as-is” snapshot of an existing capability, in order to guide future investments to develop that capability in line with business goals. This, effectively, is the essence of a maturity assessment.

    Organizational capabilities stem from an organization's architecture, routines, culture and intellectual resources, which are implicitly and explicitly embedded in its business processes. The fact that business processes underpin the realization of organizational capabilities is what has prompted business transformation and process management efforts. Thus, the BPM capability of an organization needs to be measured on an ongoing basis to ensure delivery of its planned benefits.

    In my next post I will describe Oracle’s BPM Maturity assessment methodology.

  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project designed to help manage my media library, specifically recordings created by Windows Media Center. So I am going to have the following parts to this application:

    - A Windows Service that monitors the recording folder. Once a new recording is completed that meets specific criteria, it will call several 3rd party CLI applications to remove the commercials and re-encode the video into a more hard-drive friendly format.
    - A controller GUI to modify settings of the service, specifically to add new shows to watch for, and to modify parameters for the CLI applications.
    - A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows Service, except manually on specific files instead of automatically based on specific criteria.

    (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.)

    Since the 1st and 3rd bullets share similar functionality, my design plan is to pull the common functionality into a separate library shared by both applications, but these 2 components do not need to interact otherwise. The 2nd and 3rd bullets seem to share some common functionality: both will have a GUI, and both will help define similar parameters (one to send to the service and the other to send directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (bullet #3) really does not need to interact with the service at all, except possibly for sharing a few common default parameters that can easily be put into an XML file in a common location, so it seems to make more sense to just keep everything separate.

    The controller GUI (2nd bullet) is where I am stuck at the moment. Do I just roll this functionality (allowing user interaction with the service to update settings and criteria) into the standalone application? Or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows Service to the standalone application when it doesn't need it. Is WCF the right approach to allow the controller GUI to interact with the Windows Service? Or is there a better alternative? At the moment, I don't envision a need for a significant amount of interaction - maybe just adding a new task once in a while and occasionally tweaking a parameter - but when something is changed, I do expect the Windows Service to immediately use the new settings.
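    For what it's worth, WCF over named pipes is a common fit for exactly this kind of same-machine GUI-to-service channel, and it leaves the door open for remote administration later (swap the binding). A minimal sketch, with all type and member names invented for illustration:

    ```csharp
    using System;
    using System.ServiceModel;

    // Hypothetical contract shared (via the common library) between the
    // Windows Service and the controller GUI.
    [ServiceContract]
    public interface ISettingsContract
    {
        [OperationContract]
        void AddWatchedShow(string showName);

        [OperationContract]
        void SetParameter(string name, string value);
    }

    public class SettingsService : ISettingsContract
    {
        public void AddWatchedShow(string showName) { /* persist and apply immediately */ }
        public void SetParameter(string name, string value) { /* persist and apply immediately */ }
    }

    public static class Wiring
    {
        // Called from the Windows Service's OnStart: expose the contract on a local named pipe.
        public static ServiceHost HostEndpoint()
        {
            var host = new ServiceHost(typeof(SettingsService),
                new Uri("net.pipe://localhost/MediaManager"));
            host.AddServiceEndpoint(typeof(ISettingsContract),
                new NetNamedPipeBinding(), "settings");
            host.Open();
            return host;
        }

        // Called from the controller GUI whenever the user changes a setting.
        public static void PushSettingChange()
        {
            var factory = new ChannelFactory<ISettingsContract>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/MediaManager/settings"));
            ISettingsContract proxy = factory.CreateChannel();
            proxy.AddWatchedShow("Some Show");
            ((IClientChannel)proxy).Close();
        }
    }
    ```

    Because the service receives the call directly, it can apply the new setting immediately rather than waiting to re-read a shared XML file.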

  • The Solution

    - by Patrick Liekhus
    So I recently attended a class about time management as well as read the book “The Seven Habits of Highly Effective People” by Stephen Covey.  Both have been instrumental in helping me get my priorities aligned as well as keeping me focused.

    The reason I bring this up is that it gave me a great idea for a small application with which to create a great technical stack solution that would be easy to demo and explain.  Therefore, the project from this point forward will be the Liekhus.TimeTracker application, which will bring some of the time management skills that I have acquired into a technical implementation.  The idea is rather simple, but leverages some of the basic principles of Covey along with some of the worksheets that I garnered from class.  The basics are as such: 1) a plan is a must have and 2) write it down!  A plan not written down is just an idea.  How many times have you had an idea that didn’t materialize?  Exactly.  Hence why I am writing it all down now!

    The worksheet consists of a few simple columns that I will outline below, as well as some modifications that I made according to the Covey habits.  The worksheet looks like the following (each row repeats the same choices):

    | Status  | Issue | Area | CQ      | Notes |
    |---------|-------|------|---------|-------|
    | P  F  L |       |      | 1 2 3 4 |       |
    | P  F  L |       |      | 1 2 3 4 |       |
    | P  F  L |       |      | 1 2 3 4 |       |

    The idea is really simple and straightforward; you write down all your tasks and keep track of them along the way.  The status stands for (P)ending, (F)inished or (L)ater.  You write a quick title for the issue and select the CQ (Covey Quadrant) in which the issue occurs.  The notes section is for things that happen while you are working through the issue.  And last, but not least, is the Area column that I added as a way to identify the Role or Area of your life that this task falls within, based upon Covey’s teachings.

    The second part of this application is a simple phone log that allows you to track your phone conversations throughout the day.  All of this is currently done on a sheet of paper, but being involved in technology, I want it to have bells and whistles.  Therefore, this is my simple idea for a project that will allow me to test my theories about coding and implementations.  Stay tuned, as the next session will be fleshing out the concept and coming up with user stories to begin the SCRUM process. Thanks
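    As a first pass at fleshing out the concept, the worksheet rows map naturally onto a couple of small types. This is purely a hypothetical sketch with names of my own choosing:

    ```csharp
    using System;

    public enum TaskStatus { Pending, Finished, Later }   // the P / F / L column
    public enum CoveyQuadrant { Q1 = 1, Q2, Q3, Q4 }      // the CQ column

    // One row of the worksheet.
    public class TrackedTask
    {
        public TaskStatus Status { get; set; }
        public string Issue { get; set; }             // short title for the issue
        public string Area { get; set; }              // Covey role/area of life
        public CoveyQuadrant Quadrant { get; set; }
        public string Notes { get; set; }             // things that happen along the way
    }

    // One entry of the phone log.
    public class PhoneLogEntry
    {
        public DateTime When { get; set; }
        public string Caller { get; set; }
        public string Summary { get; set; }
    }
    ```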

  • Seven Random Thoughts on JavaOne

    - by HecklerMark
    As most people reading this blog may know, last week was JavaOne. There are a lot of summary/recap articles popping up now, and while I didn't want to just "add to the pile", I did want to share a few observations. Disclaimer: I am an Oracle employee, but most of these observations are either externally verifiable or based upon a collection of opinions from Oracle and non-Oracle attendees alike. Anyway, here are a few take-aways:

    The Java ecosystem is alive and well, with a breadth and depth that is impossible to adequately describe in a short post... or a long post, for that matter. If there is any one area within the Java language or JVM that you would like to - or need to - know more about, it's well-represented at J1.

    While there are several IDEs that are used to great effect by the developer community, NetBeans is on a roll. I lost count of how many sessions mentioned or used NetBeans, but it was by far the dominant IDE in use at J1. As a recent re-convert to NetBeans, I wasn't surprised others liked it so well, only how many.

    OpenJDK, OpenJFX, etc. Many developers were understandably concerned with the change of sponsorship/leadership when Java creator and longtime steward Sun Microsystems was acquired by Oracle. The read I got from attendees regarding Oracle's stewardship was almost universally positive, and the push for "openness" is deep and wide within the current Java environs. Few would probably have imagined it to be this good, this soon. Someone observed that "Larry (Ellison) is competitive, and he wants to be the best... so if he wants to have a community, it will be the best community on the planet." Like any company, Oracle is bound to make missteps, but leadership seems to be striking an excellent balance between embracing open efforts and innovating in competitive paid offerings.

    JavaFX (2.x) isn't perfect or comprehensive, but a great many people (myself included) see great potential, are developing for it, and are really excited about where it is and where it may be headed. This is another part of the Java ecosystem that has impressive depth for being so new (JavaFX 1.x aside). If you haven't kicked the tires yet, give it a try! You'll be surprised at how capable and versatile it is, and you'll probably catch yourself smiling while coding again.  :-)

    JavaEE is everywhere. Not exactly a newsflash, but there is a lot of buzz around EE still/again/anew. Sessions ranged from updated component specs/technologies to Websockets/HTML5, from frameworks to profiles and application servers. Programming "server-side" Java isn't confined to the server (as you no doubt realize), and if you still consider JavaEE a cumbersome beast, you clearly haven't been using the last couple of versions. Download GlassFish or the WebLogic Zip distro (or another JavaEE 6 implementation) and treat yourself.

    JavaOne is not inexpensive, but to paraphrase an old saying, "If you think that's expensive, you should try ignorance." :-) I suppose it's possible to attend J1 and learn nothing, but you'd have to really work at it! Attending even a single session is bound to expand your horizons and make you approach your code, your problem domain, differently... even if it's a session about something you already know quite well. The various presenters offer vastly different perspectives and challenge you to re-think your own approach(es).
    And finally, if you think the scheduled sessions are great - and make no mistake, most are clearly outstanding - wait until you see what you pick up from what I like to call the "hallway sessions". Between the presentations, people freely mingle in the hallways, go to lunch and dinner together, and talk. And talk. And talk. Ideas flow freely, sparking other ideas and the "crowdsourcing" of knowledge in a way that is hard to imagine outside of a conference of this magnitude. Consider this the "GO" part of a "BOGO" (Buy One, Get One) offer: you buy the ticket to the "structured" part of JavaOne and get the hallway sessions at no additional charge. They're really that good.

    If you weren't able to make it to JavaOne this year, you can still watch/listen to the sessions online by visiting the JavaOne course catalog and clicking the media link(s) in the right column - another demonstration of Oracle's commitment to the Java community. But make plans to be there next year to get the full benefit! You'll be glad you did.

    All the best,
    Mark

    P.S. - I didn't mention several other exciting developments in areas like the embedded space and the "internet of things" (M2M), robotics, optimization, and the cloud (among others), but I think you get the idea. JavaOne == brainExpansion;  Hope to see you there next year!

  • Using jQuery to Make a Field Read Only in SharePoint

    - by Mark Rackley
    Okay… this will be my shortest blog post EVER. Very little rambling, I promise, and I’m sure this has been blogged more than once, so I apologize for adding to the noise, but like I always say, I blog for myself so I have a global bookmark.

    So, let’s say you have a field on a SharePoint form and you want to make it read only. You COULD just open it up in SPD and easily make it read only, but some people are purists and don’t like to use SPD or modify the default new/edit/disp forms. Put me in the latter camp. I try to avoid modifying these forms, and it seemed like such a simple task that I didn’t want to create a new un-ghosted form.  So… how do you do it? It’s only one line of jQuery. All you need to do is find the id for your input field and capture the keypress on it so that it cannot be modified (you should probably capture clicks for dropdowns/checkboxes/etc., but I didn’t need to).  Anyway, here’s the entire script:

    ```html
    <script src="jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
        jQuery(document).ready(function($) {
            // capture keypress on our read-only field and return false
            $('#idOfInputField').keypress(function() {
                return false;
            });
        });
    </script>
    ```

    You can find the ID of your input field by viewing the source; this ID stays consistent as long as you don’t muck with the list or form in the wrong way.  Please note, you CANNOT disable the input field as an alternative to capturing the keypress. If you do this and save the form, any data in the disabled fields will be wiped out.

    There are probably a dozen ways to make a field read-only, and if all you are using jQuery for is to make a field read-only, then you might want to question your use of adding the overhead (although it’s really not that much). Hey… it’s another tool for your tool belt.

  • changing drive nodes & hdparm

    - by Kalamalka Kid
    I am currently attempting to create a command that works at startup to kill the power on two of my very noisy hard drives. I have edited the /etc/rc.local file to include this command:

    ```
    sudo hdparm -y /dev/sdc
    sudo hdparm -y /dev/sdd
    exit 0
    ```

    While I think this should work, it seems the allocated drives keep switching around every time I reboot. I have sda, sdb, sdc, sdd, and sde, but they keep getting jumbled around, making the drive I wish to shut down different from sdd, and the task of shutting down the right drive on start-up quite cumbersome. I had a perfectly functioning fstab file which disappeared, but I restored it from the backup into the /etc dir:

    ```
    # <file system> <mount point> <type> <options> <dump> <pass>
    # Entry for /dev/sda1 :
    UUID=43c09daf-08a5-44f2-89b0-fc7c6f0d1e67 / ext4 errors=remount-ro 0 1
    # Entry for /dev/sdd1 :
    UUID=443AFBAD7FE50945 /media/DX100 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
    # Entry for /dev/sdb1 :
    UUID=FCE456F5E456B21E /media/GalaxyM83 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
    # Entry for /dev/sdf1 :
    UUID=1CA057FDA057DBB8 /media/Holideck ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
    # Entry for /dev/sdc1 :
    UUID=7ABB49654B799D40 /media/JX3P ntfs defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
    ```

    It seems every time I boot, the order of the drives changes. I do not know how to resolve this. A quick workaround was to go with UUIDs instead of the device letters by editing /etc/rc.local to include:

    ```
    hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
    hdparm -y /dev/disk/by-uuid/7ABB49654B799D40
    ```

    So I thought I was in the clear, as I heard both hard drives die down during the boot sequence, BUT, as soon as I log in, both drives start up again! So now I have to figure out what is making them start up again after log in, or perhaps find another way to get them to turn off. Is there some kind of command I can get to execute after log in? I tried editing the startup applications to include autossh with:

    ```
    autossh - sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40
    autossh - sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
    ```

    but this did not seem to turn off the disks after log in.

  • WAIT-VHUB? What's Going On?

    - by Neeraj Gupta
    I know many of you have been working on Oracle's Exalogic and other Engineered Systems. With partitions enabled now, things have gone multi-dimensional. But it's fun, isn't it?

    If you have some EoIB configurations together with InfiniBand partitions, and the VNICs are not coming up and are staying in the WAIT-VHUB state, chances are that you have forgotten to add the InfiniBand Gateway switches' Bridge-X port GUIDs to your partition. These must be added as FULL members for EoIB to work properly. VHUB means a virtual hub in EoIB. Bridge-X is the access point for hosts to work over EoIB, so that's why it must be a full member in the partition.

    Step 1: Find out the port GUIDs of your Bridge-X devices in the IB Gateway switch.

    ```
    # showgwports

    INTERNAL PORTS:
    ---------------
    Device    Port  Portname    PeerPort  PortGUID            LID     IBState  GWState
    -----------------------------------------------------------------------------------
    Bridge-0   1    Bridge-0-1     4      0x0010e00c1b60c001  0x0002  Active   Up
    Bridge-0   2    Bridge-0-2     3      0x0010e00c1b60c002  0x0006  Active   Up
    Bridge-1   1    Bridge-1-1     2      0x0010e00c1b60c041  0x0026  Active   Up
    Bridge-1   2    Bridge-1-2     1      0x0010e00c1b60c042  0x002a  Active   Up
    ```

    Step 2: Add these port GUIDs to the IB partition associated with EoIB. Log in to the master SM switch for this task.

    ```
    # smpartition start
    # smpartition add -pkey <PKey> -port <port GUID> -m full
    # smpartition commit
    ```

    Enjoy!

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified):

    There are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services.

    The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how do you qualify whether something is new knowledge or existing knowledge which is just rediscovered?

    Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase?

    For example, does it belong to development or to R&D to:

    - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) which access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - Design a new communication protocol to allow faster replication of data between two data centers of the company?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
    - Enhance an existing application by adding gestures on touch screens, after doing studies and testing that show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
    - Create a Domain-Specific Language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?

  • How to force a clock update using ntp?

    - by ysap
    I am running Ubuntu on an ARM-based embedded system that lacks a battery-backed RTC. The wake-up time is somewhere in 1970. Thus, I use the NTP service to update the time to the current time. I added the following line to the /etc/rc.local file:

    ```
    sudo ntpdate -s time.nist.gov
    ```

    However, after startup it still takes a couple of minutes until the time is updated, during which period I cannot work effectively with tar and make. How can I force a clock update at any given time?

    UPDATE 1: The following (thanks to Eric and Stephan) works fine from the command line, but fails to update the clock when put in /etc/rc.local:

    ```
    $ date ; sudo service ntp stop ; sudo ntpdate -s time.nist.gov ; sudo service ntp start ; date
    Thu Jan  1 00:00:58 UTC 1970
     * Stopping NTP server ntpd   [ OK ]
     * Starting NTP server        [ OK ]
    Thu Feb 14 18:52:21 UTC 2013
    ```

    What am I doing wrong?

    UPDATE 2: I tried following the few suggestions that came in response to the 1st update, but nothing seems to actually do the job as required. Here's what I tried:

    - Replace the server with us.pool.ntp.org
    - Use explicit paths to the programs
    - Remove the ntp service altogether and leave just sudo ntpdate ... in rc.local
    - Remove the sudo from the above command in rc.local

    Using the above, the machine still starts at 1970. However, when doing this from the command line once logged in (via ssh), the clock gets updated as soon as I invoke ntpdate. The last thing I did was to remove that from rc.local and place a call to ntpdate in my .bashrc file. This does update the clock as expected, and I get the true current time once the command prompt is available. However, this means that if the machine is turned on and no user is logged in, then the time never gets updated. I can, of course, reinstall the ntp service so at least the clock is updated within a few minutes of startup, but then we're back at square 1.

    So, is there a reason why placing the ntpdate command in rc.local does not perform the required task, while doing so in .bashrc works fine?

  • links for 2010-12-22

    - by Bob Rhubart
    - @hajonormann: BPM: Top Seven Architectural Topics in 2010
      Oracle ACE Director Hajo Normann offers details on how to design a BPM/SOA solution including: modeling human interaction, improving BPM models, orchestrating composed services, central task management, new approaches for business-IT alignment, solutions for non-deterministic processes, and choreography. (tags: oracle otn soasymposium infoq soa bpm)

    - InfoQ: Simplicity, The Way of the Unusual Architect
      Dan North talks about the tendency developers-becoming-architects have to create bigger and more complex systems. Without trying to be simplistic, North argues for simplicity, offering strategies to extract the simple essence from complex situations. (tags: ping.fm)

    - Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV (Wim Coekaerts Blog)
      "One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground." - Wim Coekaerts (tags: oracle otn virtualization oraclevm)

    - Oracle VM VirtualBox 4.0.0 Released! (Oracle's Virtualization Blog)
      And you were worried about what to get that special someone for Christmas... (tags: oracle otn virtualization virtualbox)

    - Virtual Developer Day: Oracle WebLogic Server & Java EE (#OTNVDD) (Oracle Technology Network Blog (aka TechBlog))
      "Virtual Developer Day is back with a vengeance! On Feb. 1, login to learn how Oracle WebLogic Server enables a whole new level of productivity for enterprise developers." Registration is open. (tags: oracle otn events webinar java)

    - New Coherence 3.6 Oracle University Course (Cristóbal Soto's Blog)
      Cristóbal Soto shares information on the "Oracle Coherence 3.6: Share and Manage Data in Clusters" course now available through Oracle University. (tags: oracle otn grid coherence)

    - The Aquarium: Oracle WebLogic Server & Java EE developer day
      "Oracle WebLogic is well on its way to contribute to the general Java EE 6 momentum and the OTN Blog has just announced a Virtual Developer Day for Oracle WebLogic." (tags: oracle otn weblogic java)

    - Enterprise 2.0 Use Cases for Semantic Web (Reiser 2.0)
      "How can an enterprise improve the efficiency and effectiveness of their Knowledge and Community model leveraging semantic technologies and social networking dynamics?" - Peter Reiser (tags: oracle otn enterprise2.0 semanticweb)

    - John Gøtze: European Interoperability Framework 2.0
      "This week, the European Commission announced an updated interoperability policy in the EU. The Commission has committed itself to adopt a Communication that introduces the European Interoperability Strategy (EIS) and an update to the European Interoperability Framework (EIF)..." - John Gøtze (tags: entarch Interoperability)

    - Andy Mulholland: Maybe Web 3.0 is quite understandable – and a natural result
      "The idea of Web 1.0 = content, Web 2.0 = people and Web 3.0 = services has a nice symmetrical feel to it, in fact it feels basically right as such a definition would include the two other major definitions as well. So if we put these things all together what picture do we see?" - Andy Mulholland (tags: web2.0 web3.0)

    - Ken Downs: A Working Definition of Business Logic, with Implications for CRUD Code
      "The Wikipedia entry on 'Business Logic' has a wonderfully honest opening sentence stating that 'Business logic, or domain logic, is a non-technical term...'" (tags: businesslogic crud)

  • Oracle Service Cloud May 2014 Release – Focus on your driving by JP Saunders

    - by Tuula Fai
    The next time you’re twiddling dials on your car’s dashboard to get the air to blow in the right direction, and the right song to play on the stereo, while pulling on the wires to charge your phone and punching in passwords to re-sync your hands-free headset to take a call, consider this… does having a better dashboard UI in your car improve your driving performance?

    The Tesla car has one of the most modern and intuitive dashboards in any commercial car today. It is actually based on the design of a smartphone, which can download apps and updates directly from the cloud. The 17” touchscreen, Linux-based dashboard totally integrates all channels and devices, allowing the driver to focus on the smooth driving and power of this luxury (toy) car. What the folks at Tesla didn't do was avoid the complexity of our needs. Instead, they streamlined them. And, while we might not all be able to afford a Tesla, their approach demonstrates that a modern UI approach can ultimately make a positive difference in our lives and businesses.

    This is why the productivity and effectiveness of a Modern Contact Center is many times greater than that of a traditional contact center. Agents in a Modern Contact Center get to focus on the task at hand, the customer engagement, rather than stumbling their way through Lego blocks of complexity. The Oracle Service Cloud is a modern approach to customer service that empowers your agents to achieve greater focus on improving your operational and strategic success through streamlined business processes.

    Here are some of the recent May 2014 release highlights of the Oracle Service Cloud:

    - Performance Enhanced Desktop UI: A modern agent desktop interface that streamlines clumsy tasks, logins, screens and workflows and is optimized for agent and system performance. Improvements include performance for drag-and-drop configurable views, saved searches, and improved caching for high-speed performance even during disconnected or slow internet access.

    - Customer Experience Routing: A streamlined, automatic way to connect the right customer need to the best agent skills, based on multidimensional variables such as product skills, language skills, workload, and call volume, to optimize the connection and resolution experience.

    - On-The-Go Mobile: Improvements to the Agent mobile app that extend connectivity to websites, and customer surveys that are mobile-ready and rendered for any device, ensuring the customer’s voice is captured while the insight is still top of mind.

    - Infused Social Engagement: Enhancements to infused social capabilities allow agents to respond in social threads directly from within the agent desktop, with the information becoming part of the incident record for automatic actions (such as reply or escalate) triggered off the response.

    - Front-End Siebel Contact Center: The market-leading online Web Customer Self-Service interface from the Oracle Service Cloud is now out-of-the-box ready for Oracle Siebel customers. Deploy a new online web self-service interface in a matter of weeks to have customers self-serve and self-solve answers, with escalated incidents routed directly into the Oracle Siebel Contact Center.

    For more information on the latest enhancements for the Oracle Service Cloud, please see the Oracle Service Cloud May 2014 Capabilities and Benefits.

    Related blogs: Oracle Service Cloud Feb 2014

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once. But what do you do if you have small parts of data and you need to reduce them all the time? A simple example: choosing a service for a request.

    Imagine we have 10 services. Each service provides the host with sets of request headers and post/get arguments. Each service declares that it has 30 unique keys - 10 per set:

    ```
    service A:
        name
        id
        ...
    ```

    Now imagine we have a distributed services host: 200 machines with 10 services on each. Each service has 30 unique keys in its sets. But now, to find which service to map an incoming request to, we make our services post unique values that map to those sets. We can have up to (or more than) 10,000 such value sets on each machine per service:

    ```
    service A, machine 1:     name = Sam    id = 13245   ...
    service A, machine 1:     name = Ben    id = 33232   ...
    ...
    service A, machine 100:   name = Ron    id = 777888  ...
    ```

    So we get 200 * 10 * 30 * 30 * 10,000 == 18,000,000,000 combinations, and we get 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. Our task is to find a service for each request (or at least the machine it is running on). On all machines across the cluster, the same services share the same rules. We can first select which service the request came to via a rules filter of 10 * 30, and then we are left with 200 * 30 * 10,000 == 60,000,000. So... 60 million is definitely a problem.

    My hope is to map the 30 * 10,000 values onto something like an artificial neural network - a Perceptron that outputs 1 if the 30 words (some hashes of the words) from the request are correct, and 0 if fewer match. I'd then send each such Perceptron for each service from each machine to the gateway, so the gateway would have a Perceptron <-> machine map for each service.

    Can anyone tell me if my Perceptron idea is at least "sane"? Or do normal people do it some other way? Or are there better ANNs for such purposes?
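    For what it's worth, the "Perceptron" as described boils down to a thresholded set-membership test over hashed key/value pairs, which is cheap to ship to the gateway. A minimal sketch of that behavior (names invented; string.GetHashCode stands in for whatever stable hash the real system would use):

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    // One "perceptron" per (service, machine): it answers 1 if enough of the
    // request's items fall inside the value sets this instance owns, else 0.
    public class InstanceMatcher
    {
        private readonly HashSet<int> _ownedHashes;  // hashes of this instance's key=value pairs
        private readonly int _threshold;             // e.g. 30 genuine items out of 45

        public InstanceMatcher(IEnumerable<string> ownedPairs, int threshold)
        {
            _ownedHashes = new HashSet<int>(ownedPairs.Select(p => p.GetHashCode()));
            _threshold = threshold;
        }

        public int Score(IEnumerable<string> requestPairs)
        {
            // 45 items come in, 15 of which are noise; count the overlap.
            int hits = requestPairs.Count(p => _ownedHashes.Contains(p.GetHashCode()));
            return hits >= _threshold ? 1 : 0;
        }
    }
    ```

    The gateway would keep a matcher-to-machine map per service and evaluate only the matchers for the pre-filtered service, which amounts to set lookups rather than a trained network. Whether a genuine ANN buys anything over this depends on whether fuzzy (partial) matches must ever win.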

  • Career Development: What should I learn next after Python? and Why? [closed]

    - by Josh
    Hi all, I'm currently learning Python. I want to know which of these programming languages I should learn next:

    - PHP
    - ActionScript 3
    - Objective-C (iPhone applications)

    I work in the multimedia industry and decided to learn Python as my first serious programming language because I wanted to learn the basics of programming and mainly to write scripts at work that automate tasks (e.g. editing multiple XML files quickly).

    At work we have a senior developer who knows ActionScript and PHP very well (although he knows PHP better). We have also been developing iPhone applications for 2 weeks. Our senior developer could learn it, although we currently have lots of PHP and ActionScript 3 work and haven't had time or reason to pick up iOS development.

    Here are the reasons I want to learn each language, but I cannot decide which I'll learn next:

    - PHP: I want to learn PHP because it will help with web development. PHP is very much wanted by employers. Our senior developer at work writes everything in it - web sites, CMSs, etc. (including XML checks and scripts) - so I would learn a lot from him (once I learn the basics). However, I don't want to learn web development where you have to deal with lots of cross-browser problems.

    - ActionScript 3: At work we are looking to put on another developer to help with online activities and very small games (using ActionScript 3.0 and Flash CS5), e.g. first-aid activities. I would like to do things that have an element of design, as I'm better at Photoshop than developing. I want to be creative; I like to interact with users in a fun way.

    - Objective-C (iPhone applications): We are an all-Mac office, and we may get more iPhone/iPad application work that needs to be done. Work has found it nearly impossible to find good iPhone developers. I like Apple products (Macs and iPhones), and I would like to make my own games and applications in my spare time (if I knew how).

    Should I learn ActionScript first because it would be easier to learn than Objective-C? Should I learn PHP because it is very widely used? Should I learn Objective-C because it is really wanted by employers now?

  • Mobile Apps: An Ongoing Revolution

    - by Steve Walker
    a guest post from Suhas Uliyar, VP Mobile Strategy, Product Management, Oracle

    The rise of smartphone apps has proved transformational for businesses, increasing the productivity of employees while simultaneously creating some seriously cool end-user experiences. But this is a revolution that is only just beginning. Over the next few years, apps will change everything about the way enterprises work, as well as overhauling the experiences of customers.

    The spark for this revolution is simplicity. Simplicity has already proved important for the front-end of apps, which are now often as compelling and intuitive as consumer apps. Businesses will encourage this trend, both to further increase employee productivity and to attract ‘digital natives’ (as employees and customers). With the variety of front-end development tools available already, this should be a simple mission for developers to accomplish - but front-end simplicity alone is not enough for the enterprise mobile revolution. Without the right content, even the most user-friendly app is useless. Yet when it comes to integrating apps with ‘back-end’ systems to enable this content, developers often face a complex, costly and time-consuming task. Then there is security: how can developers strike a balance between complying with enterprise security policies and keeping the user experience simple? Complexity has acted as a brake on innovation, with integration and security compliance swallowing enterprise resources. This is why the simplification of integration, security and scalability is so important: it frees time and money for revolutionary innovation.

    The key is to put in place a complete and unified SOA integration platform that runs across the entire enterprise and enables organizations to easily integrate and connect applications across IT environments. The platform must also be capable of abstracting apps from the underlying OS and enabling a ‘write-once, run-anywhere’ capability for mobile devices - essential for BYOD environments and integrating third-party apps. Mobile Back-end-as-a-Service can also be very important in streamlining back-end integration. Mobile services offered through the cloud can simplify mobile application development with a standard approach to dealing with complex server-side programming and integration issues. This allows the business to innovate at its own pace while providing developers with a choice of tools to speed development and integration.

    Finally, there is security, which must be handled in a way that encourages users to make the most of their mobile devices and applications. As mobile users, we want convenience, and that is why we generally approve of businesses that adopt BYOD policies. Enterprises can safely encourage BYOD because they can separate, protect, and wipe corporate applications by installing a secure ‘container’ around corporate applications on any mobile device. BYOD management also means users’ personal applications and data can be kept separate from the enterprise information - giving them the confidence they need to embrace the use of their devices for corporate apps.

    Enterprises that place mobility at the heart of what they do will fundamentally transform their businesses and leap ahead of the competition. As businesses take to mobile platforms that simplify integration, security and scalability, we will see a blossoming of innovation that will drive new levels of user convenience and create new ways of working that we are only beginning to imagine.

  • Ideas for time-keeping in a web-based RPG?

    - by ashy_32bit
    I've been assigned the task of doing the preliminary research for a web-based MMO RPG. My biggest problem here is "web based" vs "MMO RPG". I did some research about time-keeping systems and I'm totally confused as to how exactly something as real-time as an MMO RPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping, but can it work in an MMO setting?

    EDIT: Take a battle for example. Player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle proceed between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way: you can ask the server and get a response, but the server can't update you unless asked to; this is well known and simply explained). So I thought maybe I could change the whole time-keeping model from real-time to a more non-real-time model (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity".

    EDIT 2: It is not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only even for the most trivial of multiplayer games, let alone an RPG. So sure, there would be a (probably complex) server-side component to it (the so-called game engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits the game mechanics' abilities (like how real-time or turn-based it is going to be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server cannot inform you if anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bi-directional tech (totally different animals) like Flash, WebSockets, HTML5, etc.

    So maybe the question is: is it possible to implement an MMORPG using only HTML5/PHP and no periodic page refreshes? If so, what would be the rules to make it an MMO RPG? Can't explain it any clearer. Sorry :D
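    One classic workaround within plain HTTP is long polling: the client "asks", and the server simply withholds the answer until an event for that player exists (or a timeout passes), so player B learns about the attack on his next held request. A client-side sketch, in C# purely for illustration; the endpoint URL is invented:

    ```csharp
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class EventPoller
    {
        private static readonly HttpClient Client = new HttpClient
        {
            // Must be longer than the server's hold time (say, 60s).
            Timeout = TimeSpan.FromSeconds(70)
        };

        public static async Task PollLoopAsync(string playerId)
        {
            while (true)
            {
                try
                {
                    // The server holds this request open and responds as soon
                    // as something happens to this player ("A attacked you").
                    string json = await Client.GetStringAsync(
                        "https://game.example.com/events?player=" + playerId);
                    Console.WriteLine(json); // hand off to the game UI instead
                }
                catch (TaskCanceledException)
                {
                    // Timed out with no events; just ask again.
                }
            }
        }
    }
    ```

    The same shape works from a browser with XMLHttpRequest against a PHP backend; WebSockets (mentioned above) replace the repeated asks with a genuinely bi-directional channel.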

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I could probably squeeze quite a lot more from MySQL, but it just seems we've come to clinging on to the wrong tool for the task just because we know it well. Eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra.

    The story is more or less like this:

    - A user can send a gift to other users.
    - Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date.
    - For each gift sent we generate a token that uniquely identifies that gift.
    - For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.).
    - A user can request to see all his currently pending gifts.
    - A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
    - We need the ability to "dump" or "ignore" expired gifts (gifts x days old are considered expired).

    There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra?

    Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (wish I could have kept it for more, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates.

    Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset on the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL solution, since it's very hard to find examples of such designs out there.

    Brainlag, I'm not trying to come off as rude. I actually appreciate it a lot that you are the only one who even bothered to respond, but I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please. The question is about NoSQL.
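    Not an authoritative design, but one way the queries above map onto keys in a wide-row store like HBase or Cassandra, where each read becomes a single-row lookup; all names here are invented:

    ```csharp
    using System;

    public static class TokenKeys
    {
        // Gift lookup: token -> gift metadata (sender, expiry, public/limited).
        public static string GiftRow(string giftToken) => "gift:" + giftToken;

        // "All my pending gifts": one row per recipient, one column per pending
        // token, with the per-gift status (accepted/declined) as the cell value.
        public static string PendingRow(string userId) => "pending:" + userId;

        // "Who did I send gifts to today": keyed by sender + day, so the daily
        // limit check is one row read and old days can simply be dropped.
        public static string SentRow(string senderId, DateTime day) =>
            "sent:" + senderId + ":" + day.ToString("yyyyMMdd");
    }
    ```

    Embedding the day in the key (plus per-column TTLs, which Cassandra supports) is what replaces the expensive "dump expired rows" deletes that hurt in MySQL.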

  • WeSquare (NL) helps major CG customer integrate Oracle Service Cloud (RightNow) with JDEdwards

    - by Richard Lefebvre
    When this well-known, Italy-based CG player claimed that they needed a new CRM tool, Oracle partner WeSquare had a precise idea of what would be required, knowing that the customer was using JDEdwards as an ERP: they immediately thought about a solution that would help synchronize the customer’s back-end system with the new CRM interface. The customer asked for presentations from three companies, including Oracle, and eventually selected Oracle Service Cloud (RightNow) with Alfa Sistemi (Oracle Platinum Partner) as a System Integrator, supported by WeSquare (Oracle Gold partner specialized in RightNow).

    Synchronizing an on-premises ERP with a new SaaS-based CRM platform could be seen as an uphill task, but WeSquare was determined, during the presales cycle, to prove that they had the skills and the attitude to make the difference. So they rolled up their sleeves and got to it: five days of relentless work, missed lunches, and hours of brainstorming showed their result in the form of a new interface that works fabulously well with the JDEdwards ERP back-end and was successfully pitched by Oracle to the end customer to win the deal! WeSquare took the occasion to learn that they can integrate Oracle Service Cloud (RightNow) with practically every other solution that a customer may run. As part of the project, WeSquare was also involved in several add-on developments with the aim of enriching Oracle Service Cloud’s functionality.

    WeSquare is based in The Netherlands with an onshore practice supported by off-shore teams in India. WeSquare can integrate and synchronize any application with RightNow. For more information, visit www.wesquare.nl or contact Wiebe Blankenberg (Managing Director) at +31 (0) 6 3632 1104

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM.

    North Carolina State University, Jane S. McKimmon Conference Center
    1101 Gorman St
    Raleigh, North Carolina 27606
    United States

    From the Raleigh Event page:

    Event Overview: Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect:

    Windows Development with Visual Studio 2010: Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo-heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers.

    Web and Cloud Development with Visual Studio 2010: If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010, both for in-house ASP.NET development and the new frontier of the Cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated jQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure.

    Windows Phone 7 Developer Tools and Platform Overview: This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience, including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications.

    Have a day. :-|

  • Emaroo 1.4.0 Released

    - by WeigeltRo
    Emaroo is a free utility for browsing most recently used (MRU) lists of various applications. Quickly open files, jump to their folder in Windows Explorer, copy their path - all with just a few keystrokes or mouse clicks.

    tl;dr: Emaroo 1.4.0 is out, go download it on www.roland-weigelt.de/emaroo

    Why Emaroo? Let me give you a few examples. Let’s assume you have pinned Emaroo to the first spot on the task bar so you can start it by hitting Win+1. To start one of the most recently used Visual Studio solutions you type Win+1, [maybe arrow key down a few times], Enter. This means that you can start the most recent solution simply by Win+1, Enter.

    What else?

    - If you want to open an Explorer window at the file location of the solution, you type Ctrl+E instead of Enter.
    - If you know that the solution contains “foo” in its name, you can type “foo” to filter the list. Because this is not a general-purpose search like e.g. the Search charm, but instead operates only on the MRU list of a single application, you usually have to type only a few characters until you can press Enter or Ctrl+E.
    - Ctrl+C copies the file path of the selected MRU item, Ctrl+Shift+C copies the directory.
    - If you have several versions of Visual Studio installed, the context menu lets you open a solution in a higher version.
    - Using the context menu, you can open a Visual Studio solution in Blend.

    So far I have only mentioned Visual Studio, but Emaroo knows about other applications, too. It remembers the last application you used, and you can change between applications with the left/right arrow or accelerator keys. Press F1 or click the Emaroo icon (the tab to the right) for a quick reference.

    Which applications does Emaroo know about? Emaroo knows the MRU lists of:

    - Visual Studio 2008/2010/2012/2013
    - Expression Blend 4, Blend for Visual Studio 2012, Blend for Visual Studio 2013
    - Microsoft Word 2007/2010/2013
    - Microsoft Excel 2007/2010/2013
    - Microsoft PowerPoint 2007/2010/2013
    - Photoshop CS6
    - IrfanView (most recently used directories)
    - Windows Explorer (directories most recently typed into the address bar)

    Applications that are not installed aren’t shown, of course.

    Where can I download it? On the Emaroo website: www.roland-weigelt.de/emaroo

    Have fun!

  • Why would more CPU cores on virtual machine slow compile times?

    - by Sid
    [edit #2] If anyone from VMware can hit me up with a copy of VMware Fusion, I'd be more than happy to do the same as a VirtualBox vs VMware comparison. Somehow I suspect the VMware hypervisor will be better tuned for hyperthreading (see my answer too).

    I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited to parallel processing, as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would actually scale very well with the number of cores. But what I'm seeing is:

    - 8 cores: 1.89 sec
    - 4 cores: 1.33 sec
    - 2 cores: 1.24 sec
    - 1 core: 1.15 sec

    Is this simply a design artifact of a particular vendor's hypervisor implementation (type 2: VirtualBox in my case), or something more pervasive across VMs that keeps hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior - so if someone knows more about this than me, I'd be curious to read your answer.

    Thanks, Sid

    [edit: addressing comments]

    - @MartinBeckett: Cold compiles were discarded.
    - @MonsterTruck: Couldn't find an open-source project to compile directly. Would be great, but I can't screw up my dev env right now.
    - @Mr Lister, @philosodad: Have 8 hw threads, using VirtualBox, so there should be a 1:1 mapping without emulation.
    - @Thorbjorn: I have 6.5GB for the VM and a smallish VS2012 project - it's quite unlikely that I'm swapping in/out, thrashing the page file.
    - @All: If someone can point to an open-source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMware Fusion also sees this (for VMware vs VirtualBox compartmentalization).

    Test details:

    - Hardware: MacBook Pro Retina
    - CPU: Core i7 @ 2.3GHz (quad core, hyperthreaded = 8 cores in Windows Task Manager)
    - Memory: 16 GB
    - Disk: 256GB SSD
    - Host OS: Mac OS X 10.8
    - VM type: VirtualBox 4.1.18 (type 2 hypervisor)
    - Guest OS: Windows 7 x64 SP1
    - Compiler: VS2012 compiling a solution with 3 C# Azure projects
    - Compile times measured by a VS2012 plugin called 'VSCommands'
    - All tests run 5 times; first 2 runs discarded, last 3 averaged
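    One way to separate hypervisor scheduling effects from Visual Studio itself is a plain CPU-bound probe run inside the guest at varying degrees of parallelism. This is a sketch of mine, not the benchmark used in the post:

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    public static class ScalingProbe
    {
        public static void Main()
        {
            foreach (int dop in new[] { 1, 2, 4, 8 })
            {
                var sw = Stopwatch.StartNew();
                Parallel.For(0, 64,
                    new ParallelOptions { MaxDegreeOfParallelism = dop },
                    _ => BusyWork());
                Console.WriteLine("{0} worker(s): {1} ms", dop, sw.ElapsedMilliseconds);
            }
        }

        private static void BusyWork()
        {
            // Pure arithmetic: no I/O, no locks, no allocation.
            double x = 1.0001;
            for (int i = 0; i < 20000000; i++)
                x = Math.Sqrt(x * x + 1e-9);
        }
    }
    ```

    If this probe also degrades past 4 workers, guest-vCPU-to-physical-core scheduling (e.g. vCPUs landing on hyperthread siblings) is implicated rather than MSBuild or the disk.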

    Read the article

  • Make the Taskbar Buttons Switch to the Last Active Window in Windows 7

    - by The Geek
    The new Windows 7 taskbar's Aero Peek feature, with live thumbnails of every window, is awesome… but sometimes you just want to be able to click the taskbar button and have the last open window show up instead. Here's a quick hack to make it work better.

    To better understand the problem, imagine having nine windows of the same type open on your screen, but you are primarily working in just one of the windows at a time. Every time you want to switch back, you have to click the taskbar button and then choose the one you are using from the list, which can be pretty annoying…

    If you know your Windows 7 shortcuts, you'll know that you can simply hold down the Ctrl key while clicking on the taskbar button, and the last window will show up. In fact, you can keep holding down the Ctrl key and keep clicking, and Windows will cycle through the open windows. It's a useful shortcut, but hardly something you want to do every single time. Instead, we'll use a quick registry hack to make a normal click switch to the last open window - if you still want to see the thumbnail list, just hover your mouse over the button for half a second to see the full list.

    Manual Registry Hack for Last Active Window

    Open up regedit.exe through the Start menu search or Run box, and then head down to the following registry key:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced

    Once you're there, create a new 32-bit DWORD value on the right-hand side, give it the name LastActiveClick, and set the value to 1. Once you are done, you'll have to log off and back on, or you can kill Explorer.exe through Task Manager and re-open it.

    Download the Registry Hack Instead

    If you don't feel like registry hacking, we've provided an easy downloadable version. Simply download the file, extract it, and double-click on the LastActiveClick.reg file. Once you are done, you'll have to log off and back on, just like with the manual registry hack.

    Download LastActiveClick Registry Hack from howtogeek.com
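
    If you'd rather script the tweak than click through regedit, here is a minimal C# sketch that writes the same value programmatically. The key and value name come straight from the article; the snippet itself is illustrative, not an official How-To Geek tool.

    ```csharp
    using Microsoft.Win32;

    class LastActiveClickHack
    {
        static void Main()
        {
            // Same key as in the manual hack; HKCU is per user, so no elevation needed.
            const string keyPath =
                @"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced";

            using (RegistryKey advanced = Registry.CurrentUser.CreateSubKey(keyPath))
            {
                // 32-bit DWORD named LastActiveClick, set to 1.
                advanced.SetValue("LastActiveClick", 1, RegistryValueKind.DWord);
            }
            // Log off and back on (or restart Explorer.exe) for it to take effect.
        }
    }
    ```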

    Read the article

  • At $20/month, Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decently performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like mobile tracing and vehicle tracing, but due to the high cost of Windows hosting I developed those services in PHP (not an easy task for a .NET developer) and hosted them on Linux servers. With the recent evolution of Windows Azure, however, hosting ASP.NET websites on highly reliable servers has become affordable. Today anyone can host a responsive, highly available ASP.NET website for just $20/month using Windows Azure.

    My website coziie.com runs on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are under 3 seconds. All I spend to run this website is around $20/month; translated to Indian rupees, that's roughly Rs. 1,000. The web server of coziie.com is powered by a single Extra Small web role instance, and the backend is powered by a SQL Azure instance. Azure is quite impressive to provide 99.97% uptime. Response times during peaks are around 3 seconds, and under normal load around 1.5 seconds. Here is the uptime report provided by Royal Pingdom over the last year (chart in the original post).

    For just $20/month, Windows Azure takes care of the following apart from hosting:

    - Patches the Windows OS to the latest version
    - Upgrades ASP.NET to the latest version - coziie.com runs on ASP.NET MVC 3, and soon I'll upgrade it to ASP.NET MVC 4
    - Hosts data on the latest and best version of the SQL Server database
    - SQL Azure maintains 3 copies of the database and automatically recovers from server failures and disasters - I never worry about database backups/restores
    - Provides a staging environment for deploying applications for testing and then moving them to production - I upgrade twice a month on average

    With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team, and I just focus on building my website. I wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance for this blog!

    If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with; as your traffic grows, you can move to paid hosting.

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here:

    - an async approach, essentially based on signals, or just "async" as it is called in many papers and languages (for example the new async support in C# 5.0), with a "companion thread" that manages the policy of your pipeline
    - a concurrent, or multi-threading, approach

    I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is such a clear winner that I don't understand why, 90% of the time, people talk about multi-threading when they want to speed things up or make good use of their resources. A concrete sketch of the contrast follows below.

    I have tested multi-threaded programs and an async program on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application - even a relatively low number of threads, like 3-5, is a problem; the application is unresponsive, slow, and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result without hanging, it stays responsive, and it scales much better.

    I have also discovered that a context switch in the threading world is not cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than two threads that need to cycle and swap among each other to be computed.

    On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than two cores, is no bargain for me.

    From what I have experienced, the concurrent approach is good in theory but not that good in practice. With the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it introduces a lot of issues, from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or the job will be done at a certain point in time, which makes them really similar from a functional point of view.

    Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
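
    To make the contrast concrete, here is a minimal modern C# sketch (the URLs are placeholders) that runs the same I/O-bound work both ways. In the async version no thread blocks while requests are in flight; in the thread-per-request version each thread sits blocked on the network, paying for a dedicated stack and extra context switches - exactly the cost the question is worried about.

    ```csharp
    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    class AsyncVsThreads
    {
        static void Main() { RunAsync().GetAwaiter().GetResult(); }

        static async Task RunAsync()
        {
            var urls = new[] { "http://example.com", "http://example.org" };
            using (var client = new HttpClient())
            {
                // Async approach: requests overlap in flight, no blocked threads.
                var tasks = new Task<string>[urls.Length];
                for (int i = 0; i < urls.Length; i++)
                    tasks[i] = client.GetStringAsync(urls[i]);
                string[] bodies = await Task.WhenAll(tasks);
                Console.WriteLine("async: fetched {0} pages", bodies.Length);

                // Thread-per-request approach: each thread blocks on the network,
                // doing no useful CPU work while it holds a stack and gets
                // context-switched in and out.
                var threads = new Thread[urls.Length];
                for (int i = 0; i < urls.Length; i++)
                {
                    string url = urls[i];
                    threads[i] = new Thread(() =>
                        Console.WriteLine("thread: {0} chars",
                            client.GetStringAsync(url).Result.Length));
                    threads[i].Start();
                }
                foreach (Thread t in threads) t.Join();
            }
        }
    }
    ```

    For I/O-bound work like this, the async version gets the same overlap with far less overhead; dedicated threads only start paying for themselves when each one has genuinely CPU-bound work to do.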

    Read the article

< Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >