Search Results

Search found 23127 results on 926 pages for 'based'.

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE also used by JDeveloper, which comes with a bunch of standard behavior for editing and rendering code. One of the lesser-known features is that if you drop a text file into OWB, you can edit it. So you can drop your Tcl scripts right into OWB and edit them in place, with no need for another IDE like Eclipse just for this task. Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, saving is a little different from normal: OWB doesn't normally manipulate files, so Ctrl-S saves the OWB objects, but if you edit a file, closing it will ask whether you want to save it, check it out.

    Now we enter the realm of 'he who dares'... Note that the IDE doesn't know about Tcl files out of the box, so there is no syntax highlighting. The code is identified by the extension: .java is Java, .html is HTML, and so on. With OWB, the OMB scripts are Tcl, and we usually give these files a .tcl extension. One trick to get syntax highlighting is to simply rename the file with a .java suffix; then all of a sudden we get highlighting, as in the illustration where, side by side, we see the file with a .java extension and a .tcl extension. Pretending to be .java isn't ideal, but it gets us something more useful than Notepad. We can then change the syntax highlighting from the Tools > Preferences option so that we get Eclipse-like rendering within the IDE, albeit using a little tweak on the file names... This might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing the script.

    Read the article

  • Less Can Be More In E-Commerce

    - by Michael Hylton
    Today's consumers are inundated with product choices and vendors. Visit your favorite electronics retailer and see the vast assortment of flat-panel televisions, or the variety of detergents at the supermarket. All of this can be daunting for the average consumer looking for the products and services that interest them. In a study titled "Choice is Demotivating: Can One Desire Too Much of a Good Thing?", the author, Sheena Iyengar, found that participants actually reported greater subsequent satisfaction with their selections, and wrote better essays, when their original set of options had been limited. The same can be said for e-commerce and your website. Being able to quickly convert shoppers into buyers with effective merchandising is what makes leading businesses successful. You want to engage each individual visitor with the most relevant content to drive higher conversions and order values while decreasing abandonment, but predicting what will resonate with each customer is difficult. In a world of choices, online merchandising tools can help personalize, streamline, and refine what your customers view when they browse your online catalog. The key to being effective is to align your products and content as closely as possible with the customer's needs. The goal on the home page is to promote your brand and push visitors farther into the site. The home page is often the starting point for repeat customers as well as for new visitors hoping to address their current product needs. As the customer selects different filters and narrows the choices, valuable information is being provided to the retailer about the customer's current need, regardless of previous search behavior or what other customers with a similar demographic profile have purchased. Together with search pages, category browse pages are among the primary options available to customers as a means of finding products on your site. Once a customer reaches the product detail page, it is clear what that person desires, regardless of the segment the customer falls into. However, don't disregard campaign-based promotions completely: a campaign targeted to all customers but featuring rule-driven promotions tied to the product can be effective. Click here to learn more about merchandising techniques, so that what your customer sees is half full and not half empty.

    Read the article

  • Live Virtual Class for Partners: Application Management

    - by Patrick Rood
    November 11-12: Manageability Partner Community, Application Management Suite Live Virtual Training. This training will be offered to Oracle Partners over a live webcast during business hours. Each day will consist of approximately 2-3 hours of lecture and demos; it will be recorded and available for playback. Purpose: This virtual course is a comprehensive program of training sessions, prepared and presented by Product Managers, ensuring you have all the information you need to position and sell Oracle Application Management Suites. The sessions will be lecture-based, complemented with demonstrations. These sessions are interactive and everyone will be required to participate. Customer case studies will be used as appropriate, and there will be plenty of opportunity for in-depth discussion. Please bring to the training an understanding of what Enterprise Manager 12c does for our customers, along with your own experiences to date. Logistics: Oracle Application Management Suite Training (2 days, approx. 2-3 hours per day); WebEx session details to be provided upon registration. Monday 11th November, 14:00 GMT / 18:00 Gulf (GMT+4). Tuesday 12th November, 14:00 GMT / 18:00 Gulf (GMT+4).

    Read the article

  • Lock mouse in center of screen, and still use to move camera Unity

    - by Flotolk
    I am making a program from a 1st-person point of view. I would like the camera to be moved using the mouse, preferably using simple code, like this from XNA:

        var center = this.Window.ClientBounds.Center;
        MouseState newState = Mouse.GetState();
        if (Keyboard.GetState().IsKeyUp(Keys.Escape))
        {
            Mouse.SetPosition(center.X, center.Y);
            camera.Rotation -= (newState.X - center.X) * 0.005f;
            camera.UpDown += (newState.Y - center.Y) * 0.005f;
        }

    Is there any code that lets me do this in Unity? Since Unity does not support XNA, I need a new library to use and a new way to collect this input. This is also a little tougher, since I want one object to go up and down based on whether you move the mouse up and down, and another object to be the one turning left and right. I am also very concerned about clamping the mouse to the center of the screen, since you will be selecting items, and it is easiest to have a simple cross-hair in the center of the screen for this purpose. Here is the code I am using to move right now:

        using UnityEngine;
        using System.Collections;

        [AddComponentMenu("Camera-Control/Mouse Look")]
        public class MouseLook : MonoBehaviour
        {
            public enum RotationAxes { MouseXAndY = 0, MouseX = 1, MouseY = 2 }
            public RotationAxes axes = RotationAxes.MouseXAndY;
            public float sensitivityX = 15F;
            public float sensitivityY = 15F;
            public float minimumX = -360F;
            public float maximumX = 360F;
            public float minimumY = -60F;
            public float maximumY = 60F;
            float rotationY = 0F;

            void Update()
            {
                if (axes == RotationAxes.MouseXAndY)
                {
                    float rotationX = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX;
                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);
                    transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
                }
                else if (axes == RotationAxes.MouseX)
                {
                    transform.Rotate(0, Input.GetAxis("Mouse X") * sensitivityX, 0);
                }
                else
                {
                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);
                    transform.localEulerAngles = new Vector3(-rotationY, transform.localEulerAngles.y, 0);
                }

                // "if" rather than "while": GetKeyDown does not change within a frame,
                // so a while loop here would never exit.
                if (Input.GetKeyDown(KeyCode.Space))
                {
                    Screen.lockCursor = true;
                }
            }

            void Start()
            {
                // Make the rigid body not change rotation
                if (GetComponent<Rigidbody>())
                    GetComponent<Rigidbody>().freezeRotation = true;
            }
        }

    This code does everything except lock the mouse to the center of the screen. Screen.lockCursor = true; does not work, though: the camera then no longer moves, and the cursor no longer lets you click anything else either.
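    One possible approach, sketched below with hedged assumptions: in the Unity versions of that era, setting Screen.lockCursor hides the cursor and pins it to the center of the screen, but Input.GetAxis("Mouse X") and Input.GetAxis("Mouse Y") still report movement deltas, so mouse-look keeps working and a fixed cross-hair sits exactly where the hidden pointer is. (Newer Unity versions replace this with Cursor.lockState = CursorLockMode.Locked.) The class and field names here are illustrative, not from the original post:

        using UnityEngine;

        // Minimal sketch: rotate the camera from mouse deltas while the cursor
        // stays locked to the center of the screen.
        public class LockedMouseLook : MonoBehaviour
        {
            public float sensitivityX = 15F;
            public float sensitivityY = 15F;
            float pitch = 0F;

            void Update()
            {
                if (Input.GetKeyDown(KeyCode.Space))
                    Screen.lockCursor = true;   // Escape releases the lock in the editor

                if (!Screen.lockCursor)
                    return;                     // only mouse-look while locked

                // Mouse deltas are still reported while the cursor is locked.
                float yaw = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX;
                pitch = Mathf.Clamp(pitch + Input.GetAxis("Mouse Y") * sensitivityY, -60F, 60F);
                transform.localEulerAngles = new Vector3(-pitch, yaw, 0);
            }
        }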

    Read the article

  • TechCast Live: "Java and Oracle, One Year Later" Replay Now Available

    - by Justin Kestelyn
    Earlier this week I had the opportunity to chat with Ajay Patel, Oracle's VP leading the Java Evangelist team, about "the state of the union" for Oracle and Java. Take a look. And here are some choice quotes, some paraphrased, as helpfully transcribed by Java evangelist Terrence Barr:

    * "One key thing we have learned ... Java is not just a platform, it is also an ecosystem, and you can't have an ecosystem without a community."
    * "The objectives, strategically [for Java at Oracle], have been pretty clear: How do we drive adoption, how do we build a larger, stronger developer community, how do we really make the platform much more competitive."
    * "It's about transparency, involvement. IBM, RedHat, Apple have all agreed to working with us to make OpenJDK the best platform for open source development ... it is a sign that the community has been waiting to move the Java platform forward."
    * "It's not just about Oracle anymore, it's about Java, the technology, the community, the developer base, and how we work with them to move the innovation forward."
    * "Java is strategic to Oracle, and the community is strategic for Java to be successful ... it is critical to our business."
    * On JavaFX 2.0: "... is coming to beta soon, with a release planned in the second half [of 2011] ... will give you a new, high-performance graphics engine, the new API for JavaFX ... you will see a very strong, relevant platform for leveraging rich media platforms."
    * On the JDK and SE: "... aggressively moving forward, JDK 7 is now code complete ... looking good for getting JDK 7 out by summer as we promised. Started work on JDK 8; Jigsaw and Lambda are moving along nicely, on track for the JDK 8 release next year ... good progress."
    * On Java EE and GlassFish: "... Very excited to have GlassFish 3.1 released, with clustering and management capabilities ... working with the JCP to shortly submit a number of JSRs for Java EE 7 ... You'll see Java EE 7 becoming the platform for cloud-based development."
    * "You will see Oracle continue to step up to this role of Java steward, making sure that the language, the technology, the platform ... is competitive, relevant, and widely adopted."

    Making progress!

    Read the article

  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interacted with the internet. Nearly-impossible-to-remember numeric addresses became easy-to-remember names. Without DNS you'd be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That's assuming, of course, that a numbers-based web ever gained enough traction to be popular enough to spawn a search giant like Google. How did this shift occur, and what did we have before DNS? From Wikipedia: The practice of using a name as a simpler, more memorable abstraction of a host's numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to "localhost". Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods. The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols. Over the years it has been refined, but the core of the system is essentially the same. When you type "google.com" into your web browser, a DNS server is used to resolve that host name to the IP address 209.85.148.105, making the web human-friendly in the process. Domain Name System History [Wikipedia via Wired]
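    You can watch this resolution happen from code. A minimal sketch using the standard .NET Dns class (the addresses returned for google.com today will differ from the 2011-era 209.85.148.105 above):

        using System;
        using System.Net;

        class DnsDemo
        {
            static void Main()
            {
                // The programmatic equivalent of typing a name into the browser:
                // ask the resolver which addresses the name maps to.
                foreach (IPAddress addr in Dns.GetHostAddresses("google.com"))
                {
                    Console.WriteLine("google.com -> {0}", addr);
                }
            }
        }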

    Read the article

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose.

    Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack. At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset: essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this. The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output and no debugging; the only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this... every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device. How about this as well: in order to get a file into the package, it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally; it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, the packaging utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure: 100 inbound emails per deployment. (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.) To clarify:

    * Change the script,
    * Build it using Ant,
    * Ant will spin up a Java app that talks to RIM's servers to sign it.
    * Receive 100 emails, assuming the server is up.
    * App deployed - takes about 30 seconds.
    * BlackBerry device restarts - takes about six minutes.
    * Find and open the app. Go through security prompts.
    * Test the app, with no "console.log" output and no debugger.

    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator. Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combine that with no worthy debugging tools, and the toolset becomes a joke.

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified): There are two primary models. In one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs, and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still: how do you qualify whether something is new knowledge or existing knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or to R&D to:

    * Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing, or to be written in the future) which need to access the database?
    * Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    * Design a new communication protocol to allow faster replication of data between two data centers of the company?
    * Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    * Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic, and previous experience?
    * Enhance an existing application by adding gestures on touch screens, after studies and testing show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    * Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
    * Create a domain-specific language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs caused by wrong usage of an API in a fraction of the time it would normally take. It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task of finding the most frequent messages and eliminating those log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is roughly equal across all developers? And if it is not, how can we contact the developers to check their code? ApiChange can help you connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps. The public version currently "only" parses the binaries and pdbs to give you the following columns for a -whousesmethod query. If you happen to have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file identified via the pdb, the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query against your AD domain or the GC (Global Catalog) to get from the user name his:

    * Full name
    * Email
    * Phone number
    * Department
    * ...

    ApiChange will append this additional data to all of your query results which contain source files if you add the -fileinfo option. As I said, this is currently not enabled by default, since the AD domain needs to be configured, which is currently done via some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll. Once you have this data, you can generate metrics based on source file, developer, assembly, ..., and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you, for example, to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer whether the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way:

        namespace ApiChange.ExternalData
        {
            public interface IFileInformationProvider
            {
                UserInfo GetInformationFromFile(string fileName);
            }
        }

    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to email to ask whether his code needs a closer inspection.
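    As a hedged illustration of that extension point, here is a minimal stub implementation. The shape of UserInfo is assumed (the post does not show its definition), and the mapping is a placeholder; a real provider would query the source control system for the last check-in user and then look that account up in Active Directory:

        using ApiChange.ExternalData;

        // Illustrative only: satisfies the IFileInformationProvider contract
        // with a fixed placeholder user for every file.
        public class StubFileInformationProvider : IFileInformationProvider
        {
            public UserInfo GetInformationFromFile(string fileName)
            {
                // Assumed UserInfo shape; replace with a source-control +
                // LDAP lookup in a real provider.
                return new UserInfo { Name = "Unknown", Email = "unknown@example.com" };
            }
        }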

    Read the article

  • Thinkpad brightness steps error using FN+Home/End

    - by petermolnar
    I've run into the following problem: normally my T400 (Lenovo ThinkPad) has 16 steps of brightness, and Windows utilizes them correctly. After a fresh install and minor tweaks of Mint 12 (which is based on Ubuntu 11.10), I only had 6 steps, which was way too few. Listing /sys/class/backlight showed 3 entries. I removed the acpi-tools package and one of them disappeared - and I now have 10 steps! Therefore I think that if I can reduce the entries to 1, I'm going to have 16 steps, since the stepping will be 1 instead of 2 (or 3).

        /sys/class/backlight/
        intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-LVDS-1/intel_backlight
        thinkpad_screen -> ../../devices/virtual/backlight/thinkpad_screen

    The problem is that I'm unable to trace back which configs / daemons / kernel options trigger these two. More strangely, I discovered some odd behaviour. I monitored

        watch -n1 "cat /sys/class/backlight/thinkpad_screen/actual_brightness"

    and

        watch -n1 "cat /sys/class/backlight/intel_backlight/actual_brightness"

    while changing the brightness with Fn+Home/End combinations from max to min. The outcome is the following:

        brightness    intel      thinkpad
        ----------    -------    --------
        MAX           2408475    7
        |             1955115    5
        |             1435640    3
        |             1246740    1
        |             1086175    0
        |             1010615    6
        |              859495    4
        |              689485    2
        v              481695    0
        MIN            217235    0

        brightness    intel      thinkpad
        ----------    -------    --------
        MIN            217235    0
        |              481695    2
        |              689485    4
        |              859495    6
        |             1010615    7
        |             1086175    1
        |             1246740    3
        |             1435640    5
        v             1955115    7
        MAX           2408475    0

    When stepping from MIN to MAX, there's no difference between the last two steps. Also, the OSD icon (Cinnamon desktop, default theme) goes from full to min in 4 steps, and from full to min once again in 4 steps. So... it seems that the intel entry is working correctly, showing correct values. The thinkpad entry, however, twists things around and even shows incorrect values. Does anyone have any idea how to get rid of the thinkpad entry? System data: Linux Mint 12, 3.0.0-16 kernel, Lenovo ThinkPad T400, Cinnamon 1.4 desktop. For any additional info, please tell me what you need. EDIT: I'm sorry, I forgot to mention that I added acpi_backlight=vendor to the GRUB cmdline as well; the behaviour described above is the semi-better result compared to the default.

    Read the article

  • Calling home, receiving calls and smartphone data from the US

    - by Rob Farley
    I got asked about calling home from the US by someone going to the PASS Summit. I found myself thinking "there should be a blog post about this"... The easiest way to phone home is Skype - no question. Use WiFi, and if you're calling someone who has Skype on their phone at the other end, it's free. Even if they don't, it's still pretty good price-wise. The PASS Summit conference centre has good WiFi, as do the hotels and plenty of other places (like Starbucks). But if you're used to having data all the time, particularly when you're walking from one place to another, then you'll want a sim card. This also lets you receive calls more easily, not just solving your data problem. You'll need to make sure your phone isn't locked to your local network - get that sorted before you leave. It's no trouble to drop by a T-Mobile or AT&T store and get a prepaid sim. You can't get one from the airport, but if the PASS Summit is your first stop, there's a T-Mobile store on 6th in Seattle between Pine & Pike, so you can see it from the Sheraton hotel if that's where you're staying. AT&T isn't far away either. But there's an extra step that you should be aware of. If you talk to one of these US telcos, you'll probably (hopefully I'm wrong, but this is how it was for me recently) be told that their prepaid sims don't work in smartphones. And they're right - the APN gets detected and stops the data from working. But luckily, Apple (and others) have provided information about how to change the APN, which has been used by a company based in New Zealand to let you get your phone working. Basically, you point your phone's browser at http://unlockit.co.nz and follow the prompts. But do this from somewhere with WiFi, because you won't have data access until after you've sorted this out... Oh, and if you get a prepaid sim with "unlimited data", you will still need to get a Data Feature for it. And just for the record, this is WAY easier if you're going to the UK. I dropped into a T-Mobile shop there and bought a prepaid sim card for five quid, which gave me 250MB of data and some (but not much) call credit. In Australia it's even easier, because you can buy data-enabled sim cards that work in smartphones from the airport when you arrive. I think having access to data really helps you feel at home in a different place. It means you can pull up maps, see what your friends are doing, and more. Hopefully this post helps, but feel free to post comments with extra information if you have it. @rob_farley

    Read the article

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on an XNA moddable game project in a simple rogue style. For the purposes of this question, I'm not going to use a scripting engine, but rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise. So, in order to expose the moddable content, I have gone about creating a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem, etc. Then I've created another project called MyRogueModel. This references the MyModel project and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem - more rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin. Then finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementation of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory at run time and load any IPlugins it finds using reflection, overriding anything it finds from the default. For example, if it finds a new implementation of the DungeonGenerator, it will use that one instead. Now my question is: in order to get this far, I have effectively 2 projects that contain nothing but interfaces... which seems a little... strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies to reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: if I write 1 input system, I can use it in any game I write. If I create 3 rogue-like games, and a modder writes 1 rendering system, that modder could use the rendering system for all 3 games, because it all comes from the MyModel project. I come from a more web-based C# role, so having empty interface projects doesn't seem wrong, it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following?
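    For reference, the mod-scanning step described above is only a few lines of reflection. A hedged sketch - the directory path, class name and method name are illustrative, while IPlugin is the base interface from the post:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        public static class ModLoader
        {
            // Scan a mods directory, load each assembly, and instantiate every
            // concrete type that implements IPlugin (the post's base interface).
            public static IEnumerable<IPlugin> LoadPlugins(string modsDirectory)
            {
                foreach (string dll in Directory.GetFiles(modsDirectory, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(dll);
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(IPlugin).IsAssignableFrom(type)
                            && !type.IsAbstract && !type.IsInterface)
                        {
                            yield return (IPlugin)Activator.CreateInstance(type);
                        }
                    }
                }
            }
        }

    The game can then prefer a mod's DungeonGenerator (say) over the built-in implementation simply by checking the loaded plugins first.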

    Read the article

  • Solita Oy Achieves Oracle PartnerNetwork Specialization

    - by michaela.seika(at)oracle.com
    Helsinki, February 2, 2011 - Solita Oy, a member of the Oracle® PartnerNetwork (OPN), is the first Finnish enterprise to achieve OPN Specialized status for customer-specific systems integration and software solutions. To achieve Specialized status, Oracle partners are required to meet a stringent set of requirements that are based on the needs and priorities of the customer and partner community. By achieving a Specialized distinction, Solita Oy has been recognized by Oracle for its expertise in customer-specific systems integration and software solutions, achieved through competency development and demonstrated by the company's business results and proven success in implementing customer projects. "Solita and Oracle have cooperated for a long time, and we have been an Oracle partner for many years. We believe that the renewed partner program and the new partnership level that we have achieved will open up new opportunities for a closer collaboration with Oracle. Our increased focus on systems integration solutions and the stepping up of our specialized knowledge of SOA will enable us to provide even better solutions for our customers," said Jari Niska, Chief Executive Officer, Solita Oy. "Solita has shown trust and belief in Oracle's technology and in the business opportunities arising with it. They have contributed to building our cooperation in a consistent and systematic way. Achieving a Specialized status in our partner program is a natural further step in our close and committed cooperation. It strengthens our trust in our ability to be able to increase both turnover and profitability together," said Juha Kaskirinne, Alliances and Channel Leader, Oracle Finland Oy. About Oracle PartnerNetwork: Oracle PartnerNetwork (OPN) Specialized is the latest version of Oracle's partner program that provides partners with tools to better develop, sell and implement Oracle solutions. OPN Specialized offers resources to train and support specialized knowledge of Oracle products and solutions and has evolved to recognize Oracle's growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to differentiate through Specializations. Specializations are achieved through competency development, business results, expertise and proven success. To find out more visit http://www.oracle.com/partners or connect with the Oracle Partner community at OPN on Twitter, OPN on Facebook, OPN on LinkedIn, and OPN on YouTube. About Solita Oy: Solita Oy is a Finnish company dedicated to developing demanding information system solutions and IT professional services. Solita's customers include prominent Finnish companies and public organizations. Solita's turnover in 2010 was about 17 million euros. The company was founded in 1996 and has over 170 employees. Further information: www.solita.fi. Contacts: Jari Niska, CEO, Solita Oy, tel. +358 40 524 6400, [email protected]; Juha Kaskirinne, A&C Leader Finland, Oracle Finland Oy, tel. +358 40 506 3592, [email protected]

    Read the article

  • WebLogic Partner Community Newsletter November 2012

    - by JuergenKress
    Dear WebLogic partner community member, Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. If you missed the Oracle OpenWorld WebLogic, Java and ExaLogic highlights, you can now watch our community webcast on-demand. To experience and learn more about WebLogic 12c, make sure you attend one of the upcoming WebLogic 12c bootcamps. We are continuously adding many more locations to our training road-show! If you would like to suggest an additional location, please feel free to write to us @wlscommunity on Twitter. The key presentations from Oracle OpenWorld 2012 are published at our WebLogic Community Workspace (WebLogic Community membership required): Exalogic X3-2 launch (.pptx) & ExaLogic references 2012 (ppt) & General Session Building and Managing a Private Oracle Java & Experiences building JavaEE based PaaS Platform Compressed presentation & Oracle Enterprise Manager 12c Cloud Control Demo (Zip) & Coherence Past Present And Future (ppt) & Coherence Web Elastic Data on WebLogic 12c (zip) & Oracle Tuxedo What's New in 12c (.pptx) & Tuxedo Java Services (.pptx). Among the newest products in the middleware family, ADF Mobile and ADF Essentials are now available. Andrejus published an article on how to implement ADF Essentials on Glassfish. When you design mobile solutions, you might want to make use of the Oracle Fusion Applications user experience design patterns. We continue to promote and create joint partner marketing campaigns to upgrade iAS to WebLogic; please contact me if you are interested! Critical patch updates have also been released for iAS and the whole middleware stack; please make sure that you implement them. Jürgen Kress, Oracle WebLogic Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/WebLogicnewsNovember2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • WebCenter Customer Spotlight: Guizhou Power Grid Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary: Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion. The business objectives were to consolidate information contained in disparate systems into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information. Guizhou Power Grid Company saved more than US$693,000 in storage costs, reduced average search times from 180 seconds to 5 seconds, and solved 80% to 90% of technology and maintenance issues by searching the Oracle WebCenter Content management system.

    Company Overview: A wholly owned subsidiary of China Southern Power Grid Company Limited, Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion.

    Business Challenges: The business objectives were to consolidate information contained in disparate systems, such as the customer relationship management and power grid management systems, into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information.

    Solution Deployed: Guizhou Power Grid Company implemented Oracle WebCenter Content to build a content management system that enabled the secure, integrated management and storage of information, such as documents, records, images, Web content, and digital assets. The content management solution was integrated with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site.

    Business Results:

    * Saved more than US$693,000 in storage costs and shortened material distribution time by integrating the knowledge management solution with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site
    * Enabled staff to search 31,650 documents using catalogs, multidimensional attributes, and knowledge maps, reducing average search times from 180 seconds to 5 seconds and saving approximately 1,539 hours in annual search time
    * Gained comprehensive document management, format transformation, security, and auditing capabilities
    * Enabled users to upload new documents and supervisors to check the accuracy of these documents online, resulting in improved information quality control
    * Solved 80% to 90% of technology and maintenance issues by searching the Oracle content management system for information, ensuring IT staff can respond quickly to users' technical problems
    * Improved security by using role-based access controls to restrict access to confidential documents and information
    * Supported the efficient classification of corporate knowledge by using Oracle's metadata functions to collect, tag, and archive documents, images, Web content, and digital assets

    "We chose Oracle WebCenter Content, as it is an outstanding integrated content management platform. It has allowed us to establish a system to access, query, share, manage, and store our corporate assets.
    This has laid a solid foundation for Guizhou Power Grid Company to improve management practices." - Luo Sixi, Senior Information Consultant, Guizhou Power Grid Company

    Additional information: Guizhou Power Grid Company Customer Snapshot; Oracle WebCenter Content

    Read the article

  • Windows Phone 7 Development Updates – March 8th 2011

    - by Nikita Polyakov
    Here are the latest updates from the Windows Phone 7 developer world that went live this month. Some of the latest numbers: the Windows Phone Marketplace currently offers more than 9,000 quality apps and games and enjoys a base of over 32,000 registered developers, delivering an average of 100 new apps every day. There have been over 1 million downloads of the developer tools for Windows Phone 7.

    Trial versions help you sell more. Trials result in higher sales, by the numbers:

    * Users like trials - paid apps with trial functionality are downloaded 70 times more than paid apps that don't.
    * Nearly 1 out of 10 trial apps downloaded convert to a purchase and generate 10 times more revenue on average than paid apps that don't include trial functionality.
    * Trial downloads convert to paid downloads quickly. More than half of trial downloads that convert to a sale do so within the first 24 hours of trial download, and mostly within 2 hours of trial download.

    Microsoft Ad Control is gaining traction. By the numbers, ad-supported Windows Phone 7 apps:

    * Roughly 1/4 of all registered U.S. WP7 developers have downloaded the free Ad SDK for Silverlight and XNA.
    * Of ad-funded apps, over 95 percent use the free Microsoft Advertising Ad Control.
    * Monthly impressions from our Ad Exchange have continued to grow by double digits - impressions increased by 376 percent since January.

    For the Ad Control, the first wave of "How Do I" videos are now available on MSDN: Create an Ad in a Windows Phone 7 XNA Game App; Register Ad-Enabled Windows Phone 7 Apps; Measure Ad Performance of Windows Phone 7 Apps.

    Broader international app submission for free apps through Yalla Apps: as of today you can start submitting your free applications in developer markets that are currently not covered by Microsoft. To submit your free application if you do NOT belong to one of the Marketplace-supported countries, go to Yalla Apps.

    Marketplace policy updates: free app Marketplace submissions upped to 100, and other news. Microsoft has been revisiting a few of our Marketplace policies based on feedback from developers to reduce friction and cost, word for word:

    1. We have raised the limit on the number of certifications that can be performed for FREE apps at no cost to the registered developer from five to 100. This was a common request from developers, which we are glad to implement after building alternate methods to ensure that users can find and download high quality apps.
    2. We have converted policy 5.6 - related to the inclusion of contact information for support - from a mandatory to an optional policy. This is still a strongly recommended best practice, but we recognized and responded to developer feedback that this policy was creating excessive drag on the certification process for developers without commensurate user benefit for all apps.
    3. We also understand the desire for clarification with regard to our policy on applications distributed under open source licenses. The Marketplace Application Provider Agreement (APA) already permits applications under the BSD, MIT, Apache Software License 2.0 and Microsoft Public License. We plan to update the APA shortly to clarify that we also permit applications under the Eclipse Public License, the Mozilla Public License and other, similar licenses, and we continue to explore the possibility of accommodating additional OSS licenses.

    Enjoy and happy coding! Official blog post for reference.

    Read the article

  • How to force a clock update using ntp?

    - by ysap
    I am running Ubuntu on an ARM-based embedded system that lacks a battery-backed RTC. The wake-up time is somewhere during 1970, so I use the NTP service to update the time to the current time. I added the following line to the /etc/rc.local file:

        sudo ntpdate -s time.nist.gov

    However, after startup it still takes a couple of minutes until the time is updated, during which period I cannot work effectively with tar and make. How can I force a clock update at any given time?

    UPDATE 1: The following (thanks to Eric and Stephan) works fine from the command line, but fails to update the clock when put in /etc/rc.local:

        $ date ; sudo service ntp stop ; sudo ntpdate -s time.nist.gov ; sudo service ntp start ; date
        Thu Jan  1 00:00:58 UTC 1970
         * Stopping NTP server ntpd    [ OK ]
         * Starting NTP server         [ OK ]
        Thu Feb 14 18:52:21 UTC 2013

    What am I doing wrong?

    UPDATE 2: I tried following the few suggestions that came in response to the first update, but nothing seems to actually do the job as required. Here's what I tried:

    * Replace the server with us.pool.ntp.org
    * Use explicit paths to the programs
    * Remove the ntp service altogether and leave just sudo ntpdate ... in rc.local
    * Remove the sudo from the above command in rc.local

    Using the above, the machine still starts at 1970. However, when doing this from the command line once logged in (via ssh), the clock gets updated as soon as I invoke ntpdate. The last thing I did was to remove that from rc.local and place a call to ntpdate in my .bashrc file. This does update the clock as expected, and I get the true current time once the command prompt is available. However, this means that if the machine is turned on and no user is logged in, then the time never gets updated. I can, of course, reinstall the ntp service so at least the clock is updated within a few minutes of startup, but then we're back at square one. So, is there a reason why placing the ntpdate command in rc.local does not perform the required task, while doing so in .bashrc works fine?
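    (As an aside, the query that ntpdate performs is simple enough to sketch by hand. A hedged, minimal SNTP client: it sends the standard 48-byte client request to UDP port 123 and decodes the transmit timestamp from byte offset 40 of the reply; time.nist.gov is the server from the post, and error handling is omitted:)

        using System;
        using System.Net;
        using System.Net.Sockets;

        class SntpQuery
        {
            static void Main()
            {
                // 48-byte SNTP request: LI = 0, version = 3, mode = 3 (client).
                byte[] request = new byte[48];
                request[0] = 0x1B;

                using (var udp = new UdpClient("time.nist.gov", 123))
                {
                    udp.Client.ReceiveTimeout = 3000;
                    udp.Send(request, request.Length);
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] response = udp.Receive(ref remote);

                    // Transmit timestamp: seconds since 1900-01-01 UTC,
                    // big-endian, at byte offset 40 of the reply.
                    ulong seconds = ((ulong)response[40] << 24) | ((ulong)response[41] << 16)
                                  | ((ulong)response[42] << 8) | response[43];
                    DateTime utc = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc)
                        .AddSeconds(seconds);
                    Console.WriteLine("Server UTC time: {0:o}", utc);
                }
            }
        }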

    Read the article

  • It's All In The Cloud

    - by Natalia Rachelson
    People turned out in droves for Steve Miranda's Apps Cloud General Session. Steve, as engaging as ever, covered our Apps strategy in the cloud and reinforced that Oracle has a complete set of cloud services, including:

    • Human Capital Management
    • Talent Management
    • Sales and Marketing
    • Customer Service and Support
    • Financial Management
    • Procurement, Sourcing, and Inventory
    • Project Portfolio Management
    • Governance, Risk, and Compliance

    ... all delivered on top of the Social, Platform, and Common Infrastructure. Steve talked about Fusion being the centerpiece of our Cloud Services. The fact that Fusion is 100 percent standards-based is a big, big deal! In addition, our ERP Cloud Service is the most complete cloud service on the market. And email marketing is dead - social marketing is where the action is. It's also where Oracle is investing heavily from a Sales & Marketing Cloud perspective. Steve covered the strategic acquisitions Oracle has made to enhance our organic Cloud offering. Specifically, Oracle bought RightNow to make our Customer Service and Support Cloud service complete. We also bought Taleo to add Recruiting and Learning capabilities to our Talent Management Cloud. Steve talked about our customers and how they are benefiting from a variety of our Cloud Services. Red Robin is driving lower labor and food costs with Oracle ERP Cloud Service. He used Elizabeth Arden and UBS as the profile customers for HCM and Talent Management Service, and Brocade for Talent Management. All these customers are benefiting from a comprehensive and fully integrated HR platform that aligns compensation with performance and enhances workforce motivation and retention. At the same time, Hitachi Data Systems is using Oracle Taleo Performance Management Cloud to recruit the right competencies, pinpoint areas of improvement, and develop and monitor employee goals to support the global account organization. KLM and Overstock.com are gaining the benefits of Oracle's Customer Service and Support Service from RightNow by better engaging and serving customer needs online and through call centers. And last but not least, Graco and Key Energy are leveraging mobility features and sales forecasting and territory management capabilities within the Oracle Sales and Marketing Service: they expect to gain better visibility into sales information, drive more efficient sales campaigns, and empower their sales force with the data they need to make sales. Overall, Oracle Apps Cloud Services are enjoying significant momentum in the marketplace. Steve projected an air of confidence and enthusiasm, highlighting Oracle's latest successes with Cloud services.

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid way to make changes and test them in a consuming application.

    Building:

    * Set up the library with automatic versioning and a nuspec.
    * Set the library assembly version to auto-increment build and revision: in AssemblyInfo, use [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of build.
    * Major and minor versions: major should be changed when you have breaking changes; minor should be changed once you have a solid new release. During development I don't increment these.
    * Create a nuspec and version it with the code: in the nuspec, set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.
    * Make changes to the code.
    * Run the automated build (ruby/rake): run "rake nuget". The nuget task builds the NuGet package and copies it to a local NuGet feed. I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec called Library.nuspec is checked in next to the csproj file:

        $projectSolution = 'src\\Library.sln'
        $nugetFeedPath = ENV["NuGetDevFeed"]

        msbuild :build => [:clean] do |msb|
          msb.properties :configuration => :Release
          msb.targets :Build
          msb.solution = $projectSolution
        end

        task :nuget => [:build] do
          sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
        end

    * Set up the local NuGet feed as a NuGet package source (this is only required once per machine).
    * Go to the consuming project and update the package: Update-Package Library, or Install-Package Library the first time.

    TLDR:

    * Change the library code.
    * Run "rake nuget".
    * Run "Update-Package Library" in the consuming application.
    * Build/test!

    If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it - and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing:

    * Once you have a set of changes that you want to release, consider versioning, and possibly increment the minor version if needed.
    * Pick the package out of your local feed and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file. Replace apikey with your NuGet feed's API key, and take out the confirm(s) if you don't want them:

        @ECHO off
        echo Upload %1?
        set /P anykey="Hit enter to continue "
        nuget push %1 apikey
        set /P anykey="Done "

    * Note: it helps to prune all the unnecessary versions from your local feed during testing, once you are done and ready to publish.

    TLDR:

    * Consider the version number.
    * Run the command to copy to the public feed.

    Read the article

  • Should Exterrnal USB hard drive auto mount Ubuntu 12.04.01 LTS

    - by Chris Good
    I want to have external USB hard drives automatically mounted when plugged in. I have 2 drives that are exactly the same except for the volume label; they both have the same UUID. I want to be able to easily swap them, as I'm using them for backups and want to keep 1 at home for off-site backup. I've set up /etc/fstab so they should mount at different places based on their volume label. /etc/fstab:

        LABEL=Passport1 /media/Passport1 ntfs defaults,windows_names,locale=en_US.utf8 0 0
        LABEL=Passport2 /media/Passport2 ntfs defaults,windows_names,locale=en_US.utf8 0 0

    blkid shows:

        ...
        /dev/sdc1: LABEL="Passport2" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"
        /dev/sdd1: LABEL="Passport1" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"

    They both mount automatically during reboot but do not mount when just plugged in to a running system. I've read lots of stuff about this; much of it is old, so I'm not sure if it applies. I've read some stuff that says the mounts should happen automatically when plugged in, and lots of other stuff that says you have to install other software to make this happen, although much of it just seems to set up the fstab. What's the real story? Here is /var/log/syslog when the drive is plugged in:

        Dec 14 11:22:58 ausyvutims1 kernel: [66221.300196] usb 1-1: new high-speed USB device number 6 using ehci_hcd
        Dec 14 11:22:58 ausyvutims1 mtp-probe: checking bus 1, device 6: "/sys/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1"
        Dec 14 11:22:58 ausyvutims1 mtp-probe: bus: 1, device: 6 was not an MTP device
        Dec 14 11:22:58 ausyvutims1 kernel: [66221.656020] scsi7 : usb-storage 1-1:1.0
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.661534] scsi 7:0:0:0: Direct-Access WD My Passport 0748 1016 PQ: 0 ANSI: 6
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.666466] scsi 7:0:0:1: Enclosure WD SES Device 1016 PQ: 0 ANSI: 6
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.667739] sd 7:0:0:0: Attached scsi generic sg3 type 0
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.667913] ses 7:0:0:1: Attached Enclosure device
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.668047] ses 7:0:0:1: Attached scsi generic sg4 type 13
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.678473] sd 7:0:0:0: [sdc] 1953458176 512-byte logical blocks: (1.00 TB/931 GiB)
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.687700] sd 7:0:0:0: [sdc] Write Protect is off
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.687705] sd 7:0:0:0: [sdc] Mode Sense: 47 00 10 08
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.701076] sd 7:0:0:0: [sdc] No Caching mode page present
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.701081] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.738062] sd 7:0:0:0: [sdc] No Caching mode page present
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.738068] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.754558] sdc: sdc1
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.792006] sd 7:0:0:0: [sdc] No Caching mode page present
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.792012] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.792016] sd 7:0:0:0: [sdc] Attached SCSI disk
        Dec 14 11:22:59 ausyvutims1 ata_id[16971]: HDIO_GET_IDENTITY failed for '/dev/sdc': Invalid argument

    Thanks for any help offered

  • What to use C++ for?

    - by futlib
    I really love C++, but I'm struggling to find good uses for it lately. It is still the language to use if you're building huge systems with huge performance requirements, like backend/infrastructure code at Google and Facebook, or high-end games. But I don't get to do stuff like that. It's also a good choice for code that runs close to the hardware. I'd like to do more low-level work, but it isn't part of my job, and I can't think of useful private projects that would involve it.

    Traditionally, C++ was also a good choice for rich client applications, but those are mostly written in C# and Obj-C lately - and they aren't really that important anymore, with everything becoming a web app, or a mobile app, which are mostly written in Obj-C and Java. Web-based desktop and mobile apps are quite prominent, too. At my job, I work mostly on web applications, using Java, JavaScript and Groovy. Java is a good, popular choice for non-Google-scale backends; Groovy (or Python, or Ruby, or Node.js) is pretty good for the server side of web apps; and JavaScript is the only real choice for the client side. Even the little games I write in my spare time are lately mostly written in JavaScript, so they can run in the browser.

    So what would you suggest I could use C++ for? I'm aware that this question is very similar to an existing one. However, I don't want to learn C++ - I was a professional C++ programmer for years. I want to keep doing it and find good new use cases for it. I know that I can use C++ for web apps and games; I could even compile C++ to JavaScript with Emscripten. However, that doesn't seem like a good idea. I'm looking for something C++ is really good at, so I can stay competent in the language. If your answer is "just give up and forget C++, you'll probably never need it again", so be it.

  • C-states and P-states: confounding factors for benchmarking

    - by Dave
    I was recently looking into a performance issue in the java.util.concurrent (JUC) fork-join pool framework related to particularly long latencies when trying to wake (unpark) threads in the pool. Eventually I tracked the issue down to the power & scaling governor and idle-state policies on x86.

    Briefly, P-states refer to the set of clock rates (speeds) at which a processor can run. C-states reflect the possible idle states: the deeper the C-state (higher numerical values), the less power the processor will draw, but the longer it takes the processor to respond and exit that sleep state on the next idle-to-non-idle transition. In some cases the latency can be worse than 100 microseconds. C0 is the normal execution state, and P0 is "full speed", with higher Pn values reflecting reduced clock rates. C-states and P-states are orthogonal, although P-states only have meaning at C0. You could also think of the states as occupying a spectrum as follows: P0, P1, P2, ..., Pn, C1, C2, ..., Cn, where all the P-states are at C0.

    Our fork-join framework was calling unpark() to wake a thread from the pool, and that thread was being dispatched onto a processor in a deep C-state, so we were observing rather impressive latencies between the time of the unpark and the time the thread actually resumed and was able to accept work. (I originally thought we were seeing situations where the wakee was preempting the waker, but that wasn't the case. I'll save that topic for a future blog entry.) It's also worth pointing out that higher P-state values draw less power, and there's usually some latency in ramping up the clock (P-states) in response to offered load.

    The issue of C-states and P-states isn't new and has been described at length elsewhere, but it may be new to Java programmers, adding a new confounding factor to benchmarking methodologies and procedures. To get stable results I'd recommend running at C0 and P0, particularly for server-side applications. As appropriate, disabling "turbo" mode may also be prudent. But it also makes sense to run with the system defaults, to understand whether your application exhibits any performance sensitivity to power management policies.

    The operating system power management sub-system typically controls the P-states and C-states based on current and recent load. The scaling governor manages P-states. Operating systems often use adaptive policies that try to avoid deep C-states for some period if recent deep idle episodes proved to be very short and futile. This helps make the system more responsive under bursty or otherwise irregular load, but it also means the system is stateful and exhibits a memory effect, which can further complicate benchmarking. Forcing C0 + P0 should avoid this issue.
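    To make the "run at C0 and P0" advice concrete, here is a minimal C++ sketch of one common Linux mechanism for suppressing deep C-states: the PM QoS interface at /dev/cpu_dma_latency. This is an assumption about a generic technique, not the tool used in the investigation above, and it requires root privileges:

        // Minimal sketch: ask the Linux kernel to avoid deep C-states by
        // registering a 0-microsecond latency constraint via PM QoS. The
        // constraint holds only while the file descriptor stays open.
        #include <cstdint>
        #include <cstdio>
        #include <fcntl.h>
        #include <unistd.h>

        int main() {
            const int32_t latency_us = 0;                     // 0 ~ "stay at or near C0"
            int fd = open("/dev/cpu_dma_latency", O_WRONLY);  // needs root
            if (fd < 0) { std::perror("open /dev/cpu_dma_latency"); return 1; }
            if (write(fd, &latency_us, sizeof latency_us) != (ssize_t)sizeof latency_us) {
                std::perror("write"); close(fd); return 1;
            }
            std::puts("Deep C-states inhibited; press Enter to release...");
            std::getchar();                                   // run your benchmark meanwhile
            close(fd);
            return 0;
        }

    For the P-state side, setting the cpufreq scaling governor to "performance" (for example via /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor) pins the clock at or near P0 on most Linux distributions.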

  • Best Creational Pattern for loggers in a multi-threaded system?

    - by Dipan Mehta
    This is a follow-up to my past question: Concurrency pattern of logger in multithreaded application. As suggested by others, I am putting this question separately.

    The learning from the last question: in a multi-threaded environment, the logger should be made thread-safe and probably asynchronous (messages are queued while a background thread does the writing, releasing the requesting object's thread). The logger could be a singleton, or it could be a per-group logger, which is a generalization of the singleton. Now the question that arises is: how should the logger be assigned to an object? There are two options I can think of:

    1. The object requests the logger: should each object call some global API such as get_logger()? Such an API returns "the" singleton or the group logger. However, I feel this bakes assumptions about the application environment into the logger implementation - which is a kind of coupling. If the same object needs to be used by another application, that application also needs to implement such a method.

    2. Assign the logger through some known API: the alternative is to define a kind of virtual (abstract) class which the application implements based on its own structure, and to hand each object its logger, for example in the constructor. This is the more general method. Unfortunately, when there are many objects - a whole tree of them - passing the logger object down every level gets quite messy.

    My question: is there a better way to do this? If you had to pick one of the above, which approach would you pick, and why? Other questions remain open about configuration: how are objects' names or IDs assigned so they can be printed in log messages (as the module names), and how do these objects find the appropriate properties (such as log levels and other such parameters)? In the first approach, the central API needs to deal with all this variety; in the second approach, there is additional work. Hence, I want to understand from people's real experience how to write a logger effectively in such an environment.
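    As a concrete reading of option 2, here is a minimal C++ sketch of constructor injection against an abstract logger interface. All names here (ILogger, StderrLogger, Worker) are illustrative placeholders, not from the original question:

        #include <cstdio>
        #include <memory>
        #include <mutex>
        #include <string>
        #include <utility>

        // Abstract interface the application implements however it likes
        // (singleton, per-group, asynchronous queue, ...).
        class ILogger {
        public:
            virtual ~ILogger() = default;
            virtual void log(const std::string& msg) = 0;
        };

        // One possible thread-safe implementation: serialize writes with a mutex.
        class StderrLogger : public ILogger {
            std::mutex mtx_;
        public:
            void log(const std::string& msg) override {
                std::lock_guard<std::mutex> lock(mtx_);
                std::fprintf(stderr, "%s\n", msg.c_str());
            }
        };

        // Objects receive their logger through the constructor, so they carry
        // no assumptions about how the application wires logging together.
        class Worker {
            std::shared_ptr<ILogger> logger_;
        public:
            explicit Worker(std::shared_ptr<ILogger> logger)
                : logger_(std::move(logger)) {}
            void doWork() { logger_->log("Worker: doing work"); }
        };

        int main() {
            auto logger = std::make_shared<StderrLogger>();
            Worker w(logger);   // a tree of objects would pass this pointer down
            w.doWork();
            return 0;
        }

    A common compromise is to combine the two options: keep a get_logger()-style factory, but call it only at the application's composition root and inject the result from there, so the global lookup is confined to one place while the rest of the object tree stays decoupled.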
