Search Results

Search found 4516 results on 181 pages for 'vincent low'.


  • Data structure for pattern matching.

    - by alvonellos
    Let's say you have an input file with many entries like these: date, ticker, open, high, low, close, <and some other values>. You want to run a pattern-matching routine over the entries (rows) in that file, using a candlestick pattern such as a Doji, and that pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on). A pattern-matching routine can take an arbitrary number of rows to perform its analysis and contain an arbitrary number of subpatterns; in other words, some patterns may require 4 entries to operate on. Say also that the routine may later have to find and classify extrema (local and global maxima and minima, as well as inflection points) for the ticker over a closed interval; for example, a cubic function (x^3) has its extrema on the interval [-1, 1] at the endpoints.

    What would be the most natural choice of data structure? What about an interface that conforms a Ticker object (containing one row of data) to a collection of Tickers, so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind?

    I chose a doubly-linked circular list with the following methods: push_front(), push_back(), pop_front(), pop_back(), and an overloaded operator[] that can be used with negative parameters. But that data structure seems very clumsy: since so much pushing and popping is going on, I have to make a deep copy of the data structure before running an analysis on it.

    So, I don't know if I made my question very clear, but the main points are: What kind of data structures should be considered when analyzing sequential data points to match a pattern that does NOT require random access? What kind of data structures should be considered when classifying extrema of a set of data points?
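
    One alternative sketch (illustrative names, not from the original post): keep the series in a contiguous array and give each pattern a sliding window view into it, so matching needs no pushing, popping, or deep copies, and extrema classification can scan the same array:

        // Minimal sketch: patterns are predicates over a window of consecutive
        // ticks in a contiguous std::vector, so no copies are ever made.
        #include <cmath>
        #include <cstddef>
        #include <functional>
        #include <vector>

        struct Tick {
            // date, ticker, and other fields omitted for brevity
            double open, high, low, close;
        };

        // A pattern is any predicate over `width` consecutive ticks.
        using Pattern = std::function<bool(const Tick*, std::size_t)>;

        // Slide a window of `width` ticks across the series; report match starts.
        std::vector<std::size_t> matchPattern(const std::vector<Tick>& series,
                                              std::size_t width,
                                              const Pattern& pattern) {
            std::vector<std::size_t> hits;
            for (std::size_t i = 0; width <= series.size() && i + width <= series.size(); ++i) {
                if (pattern(&series[i], width))
                    hits.push_back(i);  // pattern begins at index i
            }
            return hits;
        }

        // Example pattern: a (crude) Doji, open and close nearly equal.
        bool isDoji(const Tick* w, std::size_t) {
            return std::abs(w->open - w->close) < 0.01 * (w->high - w->low);
        }

    Calling matchPattern(series, 1, isDoji) returns the index of every one-tick Doji; a pattern needing 4 entries just uses width 4. Extrema can be classified by a single pass over the same vector comparing neighbors, so one structure serves both uses.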

    Read the article

  • New technical product guide for Sun Ray clients

    - by Jaap
    In the Oracle online documentation system, a new Sun Ray Clients Technical Product Guide has been published. The document provides detailed information about the similarities and differences between the three Sun Ray client hardware models: Sun Ray 3, Sun Ray 3 Plus, and Sun Ray 3i. From the description in the Technical Product Guide I want to quote the following section: "...Since Sun Ray 3 Series Clients have no local operating system and require no local management, they eliminate the complexity, expenses, and security vulnerabilities associated with other thin client and PC solutions..." This has always been one of the great advantages of Sun Ray clients compared to other thin clients (which are really low-fat PCs where you have to manage thin client OS images). The guide lists the features and technical specifications of the Sun Ray clients, such as number of ports, chassis, graphics, network interfaces, power supply, operating conditions, MTBF, reliability, and other standards. The guide also contains a separate chapter on environmental data. As you may know, the Sun Ray 3 Series clients are designed specifically to address a spectrum of environmental concerns and standards, from materials to manufacturing processes to shipping, operation, and end of life. The Sun Ray 3 Series clients comply with environmental standards and certifications such as Energy Star 5.0, EPEAT, WEEE, and RoHS (see the Oracle policy for RoHS and REACH).

    Read the article

  • Optimizing MySQL

    - by Josh
    I've been researching how to optimize MySQL a bit, but I still have a few questions. MySQL Primer results: http://pastie.org/private/lzjukl8wacxfjbjhge6vw

    Based on this, the first problem seems to be that the max_connections limit is too low. I had a similar problem with Apache initially: its max connection limit was set to 100, and the web server would frequently lock up and take an excruciatingly long time to deliver pages. Raising the connection limit to 512 fixed that issue, and I've read that raising MySQL's connection limit to match is considered good practice. Given that MySQL has actually been "locking up" recently as well (connections have been refused entirely for a few minutes at a time, at random intervals), I'm assuming this is the main cause of the issue.

    However, as far as the table cache goes, I'm not sure what I should set it to. I've read that setting it too high can hinder performance further, so should I raise it to right around 551, 560, 600, or something else? Lastly, as for raising the join_buffer_size value, it doesn't even seem to be included in Debian's my.cnf file by default. Assuming there's not much I can do about adding indexes, should I look into raising it? Any suggested values? Any suggestions in general would be appreciated as well.

    Edit: Here's the number of opened tables the MySQL server is reporting. I believe this value is related to my question: Opened_tables: 22574
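
    As a sketch only (the values are illustrative starting points tied to the numbers discussed above, not recommendations; measure after each change), the relevant my.cnf section might look like this:

        [mysqld]
        # Match the web tier's 512-connection ceiling discussed above
        max_connections  = 512
        # Table cache near the suggested range; watch Opened_tables over time
        # (the variable is named table_open_cache on MySQL 5.1.3 and later)
        table_cache      = 600
        # Used for joins that cannot use an index; allocated per join per
        # connection, so keep it modest
        join_buffer_size = 1M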

    Read the article

  • Handling SEO for Infinite pages that cause external slow API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about 1 minute). Links on the site point to such pages, and when a user clicks one, it is generated while he waits. Since I cannot pre-create them all, I am trying to figure out the best SEO approach for handling these pages. Options:

    1. Create really simple pages for the web spiders, so that only real users fetch the data and generate the full page. I'm a little 'afraid' Google will see this as low-quality content, which might also look duplicated.

    2. Put them under a directory on my site (e.g. /non-generated/) and disallow that directory in robots.txt (see the sketch below). The problem here is that I don't want users to have to deal with a different URL when they want to share the page or make sense of it. I've thought about redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching them, but again I'm not sure Google will like me for that.

    3. Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site also seems slower than it should from a spider's perspective (if it only crawled the pre-generated pages, it would look much faster).

    Which approach would you suggest?
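
    For what it's worth, the robots.txt part of option 2 is a two-liner; the directory name here is just the one proposed above:

        User-agent: *
        Disallow: /non-generated/

    A meta robots noindex tag on the generated pages is a common alternative that avoids the separate directory, though the page still has to be crawled (and thus generated) for the tag to be seen.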

    Read the article

  • Penalty for collision during a racing game

    - by Arthur Wulf White
    In a racing game, how should we penalize the player for colliding head-on with obstacles such as walls, trees, and so on? How is it done in your favorite racing game, and in other successful racing games?

    Do you think temporarily disabling the engine for a second is too severe? If I do go that route, how would I convey 'the engine is disabled' to the player in a subtle and easily understood way (see the sketch below)? Is that 'too much' of a penalty? Would the slow-down from the collision itself be sufficient to discourage the player from driving too carelessly? Which is more fun? Should I consider a health bar that affects engine performance when health is low?

    Could you offer examples of games that handle this well, and ones that do it poorly? Please share your experience with obstacles in racing games and reference games you feel perform well in this aspect. I am sure we all enjoy our racing games differently, and I would like to hear different opinions regarding this issue. I would also like to hear how you feel we should penalize or reward collisions with other vehicles. Should enemy vehicles be destroyable? Should they slow down severely when they hit the back of your car, or would that make the gameplay imbalanced?
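
    To make the 'disable the engine for a second' idea concrete, here is a minimal sketch (illustrative C++, not from any particular game); the remaining penalty time doubles as the signal for a HUD cue such as a flashing engine icon or a sputtering engine sound:

        // Sketch: a head-on collision briefly cuts engine power, and the
        // remaining penalty time drives the "engine is disabled" feedback.
        struct Car {
            float enginePower  = 1.0f;  // 0..1 multiplier applied to throttle
            float penaltyTimer = 0.0f;  // seconds of disabled engine remaining
        };

        void onHeadOnCollision(Car& car) {
            car.penaltyTimer = 1.0f;    // one second; tune to taste
        }

        void updateCar(Car& car, float dt) {
            if (car.penaltyTimer > 0.0f) {
                car.penaltyTimer -= dt;
                car.enginePower = 0.0f; // no acceleration while penalized
                // drive HUD cue / sound from car.penaltyTimer here
            } else {
                car.enginePower = 1.0f; // back to normal
            }
            // elsewhere: acceleration = throttle * car.enginePower * ...
        }

    Because the penalty is a timer rather than a hard state, it is easy to experiment with shorter durations or with partial power (say 0.3f instead of 0.0f) to find the point where it discourages careless driving without feeling unfair.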

    Read the article

  • Am I too young to burn out?

    - by Steve McMesse
    I feel like I have burned out, even though I've only been out of college for 5 years.

    For the first 3 years of my career, things were going awesome. I was never anything special in school, but I felt special at my company. Looking back, I could tell that I made all the right moves: I actively tried to improve myself daily. I made a point of helping anyone I could. I made a point of (and read books about) being a good team member. I had fun.

    After 3 years in a row of being rated a top employee, I converted that political capital into choosing to work on an interesting, glamorous project with only 2 developers: me and a highly respected senior technical leader. I worked HARD on that project, and it came out a huge success: high in quality, low in bugs, no delays, etc. The senior tech lead got a major promotion and a GIGANTIC bonus. I got nothing. I was so disappointed that I just stopped caring. Over the last year, I have just kind of floated. During my first 4 years I felt energized after a 10-hour day. Now I can barely be bothered to work 6 hours a day.

    Any advice? I don't even know what I'm asking. I am just hoping smart people see this and drop me a few pieces of wisdom.

    Read the article

  • A better way to encourage contributions to OSS

    - by Daniel Cazzulino
    Currently in the .NET world, most OSS projects are available via a NuGet package. Users have a very easy path towards *using* the project right away. But let's say they encounter some issue (maybe a bug, maybe a potential improvement) with the library. At this point, going from user to contributor (of a fix, a good bug repro, or even a spike for a new feature) is a very steep and non-trivial multi-step process: registering with some open source hosting site (CodePlex, GitHub, Bitbucket, etc.), learning how to grab the latest sources, building the project, formulating a patch (or forking the code), learning the source control software the project uses (Mercurial, Git, SVN, TFS), installing whatever tools are needed for it, reading about the project's contributor workflow (do you fork & send pull requests? do you just send a patch file? do you just send a snippet? a unit test? etc.), and on, and on, and on. Granted, you may be lucky and already know the source control system the project uses, but in reality, I'd say the chances are pretty low. I believe most developers *using* OSS are far from familiar with those systems, much less with contributing back to various projects. We OSS devs like to be on the cutting edge all the time, ya' know, always jumping on the new SCC system, the new hosting site, the new agile way of managing work items, bug tracking, code reviews, etc. etc. etc. But most of our OSS users are largely the "... Read full article

    Read the article

  • A bounce-rate attack to manipulate SEO?

    - by Denis Volovik
    This is a question for experienced people who might help us shed some light on the issue. We noticed some very strange behavior on our site in Google Analytics. Some dude from Finland, namely from the city of Kouvola, is hitting one of our pages (only one page on our site) about a hundred times per day, all with an average bounce rate of 90%+. This is causing our overall bounce rate to go up by 1 to 3% per day, which is very disturbing, since we're trying to do our best to keep it as low as possible, and obviously having it jump from ~24% to 27% just because of that crazy dude is not making us happy at all.

    We tried implementing a geo-targeted script to catch this particular visitor and deliver him a juicy message, and it seemed to help at first; the hits stopped for a day or two, but now he's back. The geo-targeted script was also logging all IP addresses for page requests originating from Finland, to find out more details (and to block them at the server level later), but the requests all came from cable or DSL connections with varying, non-repeating IPs. We are all wondering what he is really up to. I think this page should be kept updated with ideas on how to combat this, and perhaps someone could also shed light on what it might be. What is the reason for doing this "bounce-rate attack", as I call it? A similar question was asked on Stack Overflow earlier, with no meaningful answer: How to stop bounce rate manipulation.

    Read the article

  • Learning to be a good developer: what parts can you skip over?

    - by Andrew M
    I have set myself the goal of becoming a decent developer by this time next year. By this I mean full experience of the development 'lifecycle', a few good apps/sites/webapps under my belt, and most importantly being able to work at a steady pace without getting sidelined for hours by some should-know-this-already technique.

    I'm not starting from scratch. I've written a lot of HTML/CSS, SQL, JavaScript, Python and VB.NET, and studied other languages like C and Java. I know about things like OOP, design patterns, TDD, complexity, computational linguistics, pointers/references, functional programming, and other academic/theoretical matters. It's just that I can't say I've really done these things yet. So I want to get up to speed, and I want to know what things I can leave till a later date.

    For instance, studying algorithms and the maths behind them is interesting and all, but so far I've hardly needed to write anything but the most basic nested loops. Investigating assembly to get a clearer picture of low-level operations would be cool, but I imagine it rarely impinges on daily work. On the other hand, looking at a functional programming language might help me write programs that are more comprehensible and less prone to hidden failures (at the moment the biggest difficulty I'm finding is when the complexity of the app exceeds my capacity to understand it; for instance, passing data around was fine until I had to start doing it with AJAX, which was a painful step up). I could spend time working through case studies of design patterns, but I'm not sure how many of them get used in 'real life'.

    I'm a programmer with basic abilities; what skills should I focus on developing? (Also, my Unix skills are very weak, as is my knowledge of Windows configuration; I'm not sure how much time I should spend on those.)

    Read the article

  • Looking for a comfortable Laptop Cooling Pad? Repurpose a pillow as a Laptop Cooling Pad

    - by Gopinath
    Update: This idea sucks, as using a pillow blocks the laptop's cooling fans and air flow, which in turn can damage the laptop. Thanks, Vijay.

    I have an HP Pavilion laptop which gets hot quickly, and most of the time I can't keep it on my lap after 30 minutes of use. It's the same with my Dell laptop, so I'm not blaming any specific brand or model. Most budget laptops generate a lot of heat and are tough to keep on your lap for long; they burn the skin, and the irritation leaves me no option other than to put them aside. While searching for options to beat the heat, I found laptop cooling pads on Amazon. They attach to the base of the laptop and act like a heat shield/sink, protecting your thighs from the heat the laptop generates. They are available from around $7 and go up to $100, depending on the features they offer. After reading reviews, I selected a trendy-looking and comfortable laptop cooling pad; it was around $25 before shipping and taxes, and I was about to buy it.

    On second thought, I started searching for household items that could be repurposed as a laptop cooling pad and save the money. The option suggested by my wife was to repurpose an old pillow. Here is my laptop cooling pad. Wow! That is a nice suggestion which saved my thighs from laptop heat, as well as my wallet from spending $25. Even if I had to buy a new pillow, I could pick up a cheap one from a Wal-Mart store for as low as $2. I also find a pillow very comfortable to use as a laptop cooling pad, since it is flexible and automatically adjusts to the shape of my body.

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting, and has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database CloneDB feature)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Services
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits: presenting a collection of standardized database service definitions, defining standardized pools of hardware and software for provisioning, role-based access to cater to different classes of users, automated procedures to provision the predefined database definitions, setting up chargeback plans based on service tiers and database configuration sizes, and so on.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM12c)
    - All 3 protection modes can be used: maximum availability, performance, and security
    - Log apply can be set to sync or async, along with the required apply lag

    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below, where RON = RAC One Node, which is supported via custom post-scripts in the service template:

        Primary | Standby [1 or more]
        --------+--------------------
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    A sample service catalog might define 4 service levels, deployed across 2 data centers, with 3 standardized sizes (as in the image accompanying the original post). Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer will come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions, thus delivering the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In Release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup: no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c: performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps, typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution for setting up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the cloud artifacts: roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates; it also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end to end. The kit works both on Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further; the script for this scenario is dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary XML: this file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle home locations, or container database names that will be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is already known via the Exadata system target available in EM.
    - Input XML: this file captures inputs for users, roles, profiles, service templates, etc.; essentially, all inputs required to define the DB services and other settings of the self service portal.
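
    As a purely hypothetical illustration (the element names below are invented for this sketch, not Oracle's actual schema; the real format ships with the kit and is documented in the Cloud Administration Guide), a cloud boundary file describes zones, pools, and hosts along these lines:

        <!-- Hypothetical structure, for illustration only -->
        <cloudBoundary>
          <zone name="East-DC">
            <pool name="gold_primary_pool" type="oracle_database">
              <host name="host1.example.com"
                    oracleHome="/u01/app/oracle/product/12.1.0/dbhome_1"/>
            </pool>
            <pool name="gold_standby_pool" type="oracle_database">
              <host name="host2.example.com"
                    oracleHome="/u01/app/oracle/product/12.1.0/dbhome_1"/>
            </pool>
          </zone>
        </cloudBoundary>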

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you. They are:

    - Custom naming of DB services: self service users can provide custom names for DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' of service templates: creating variants of a service template is now only a click away; this is vital when you publish service templates to represent different database sizes or service levels
    - Profile viewer: view the details of a profile (datafiles, control files, snapshot IDs, export/import files, etc.) prior to its selection in the service template
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool, and as an extension you can also delete successful requests
    - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, users can also specify multiple tablespaces per request

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    - Cloud Management page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Developer Preview of Java SE 8 for ARM Now Available

    - by Tori Wieldt
    A Developer Preview of Java SE 8 including JavaFX (JDK 8) on Linux for ARM processors is now available for immediate download from Java.net. As Java Evangelist Stephen Chin says, "This is a great platform for doing small embedded projects, a low cost computing system for teaching, and great fun for hobbyists." This Developer Preview is provided to the community so that you can provide us with valuable feedback on the ongoing progress of the project. We wanted to get this release out to you as quickly as we could so you can start using this build of Java SE 8 on an ARM device, such as the Raspberry Pi (http://raspberrypi.org/).

    - Download JDK 8 for ARM
    - Read the documentation for this early access release

    Let Us Know What You Think! Use the forums to share your stories, comments and questions:

    - Java SE Snapshots: Project Feedback Forum
    - JavaFX Forum

    We are interested in both problems and success stories. If something does not work or behaves differently than what you expect, please check the list of known issues, and if yours is not listed there, then report a bug in the JIRA Bug Tracking System.

    More Resources:

    - JavaFX on Raspberry Pi – 3 Easy Steps by Stephen Chin
    - OTN Tech Article: Getting Started with Java SE Embedded on the Raspberry Pi by Bill Courington and Gary Collins
    - Java Magazine Article: Getting Started with Java SE for Embedded Devices on Raspberry Pi (free subscription required)
    - Video: Quickie Guide Getting Java Embedded Running on Raspberry Pi by Hinkmond Wong

    Read the article

  • Flash AS3 sidescrolling tiles optimization

    - by Galvanize
    I'm trying to make a sidescrolling game in Flash that will run on a low-performance laptop. While studying the subject from Tonypa's tutorials, I saw that he builds a screen-sized Bitmap by copying the BitmapData of each tile from the tile sheet onto that bigger Bitmap. But when I came to think about how to scroll my map, I ran into some optimization doubts. I came up with two choices:

    1. Create a MovieClip and place in it a Bitmap instance for each tile shown on the screen, plus 1 extra row; then move them all together. When a row of tiles runs off the screen, move it to the other end of the MovieClip and replace its BitmapData with the next row of my map (sketched in code below).

    2. Use a single Bitmap with a copy of each tile drawn into it (as shown in Tonypa's tutorial) but 1 extra row tall, move the whole Bitmap, and when it comes time to replace rows, redraw the whole Bitmap and move it back to its origin position.

    The first idea is what a co-worker of mine suggested; the second one is my own. But neither of us has enough technical knowledge to be sure which technique would be optimal in performance. Can anyone help?
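
    A language-neutral sketch of the first idea (written in C++ here, though the logic is the same in AS3; all names are illustrative): keep one spare row of tile instances and recycle it, instead of redrawing a screen-sized bitmap every time a row scrolls off:

        // Sketch: visible rows + 1 spare; scroll the whole layer, and when a
        // full tile height has passed, retarget the off-screen row to the next
        // map row instead of redrawing everything.
        #include <deque>
        #include <utility>
        #include <vector>

        struct TileSprite { int mapRow = 0; /* plus position and bitmap handle */ };

        struct ScrollLayer {
            std::deque<std::vector<TileSprite>> rows; // screenRows + 1 entries
            float scrollY    = 0.0f;  // accumulated scroll within one tile
            int   nextMapRow = 0;     // next map row to page in
            float tileSize   = 32.0f;

            void scroll(float dy) {
                scrollY += dy;
                while (scrollY >= tileSize) {   // a full row has left the screen
                    scrollY -= tileSize;
                    recycleTopRow();
                }
            }

            void recycleTopRow() {
                std::vector<TileSprite> row = std::move(rows.front());
                rows.pop_front();
                for (TileSprite& t : row) {
                    t.mapRow = nextMapRow;  // reassign this tile's BitmapData
                }                           // from the tile sheet here
                rows.push_back(std::move(row));
                ++nextMapRow;
            }
        };

    The appeal of this approach is that per scroll step only one row's bitmap data is touched; the second approach redraws the entire screen-sized bitmap whenever a row boundary is crossed.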

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    - Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)
    - Grid container support: JavaFX Composer. An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx)
    - John Brunswick: Site Studio Mobile Example - WCM Reuse. The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)
    - @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls. "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud)
    - James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh. "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization)
    - Java / Oracle SOA blog: HTTP binding in SOA Suite 11g PS2 (tags: ping.fm)
    - Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)
    - SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials, and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called "Game Speed dependent on Variable FPS", the second one in his article.

    First, through my searching experience, there are a couple of people who probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the functions of this OpenGL toolkit relevant to my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit that helps with common tasks in OpenGL.

    - glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
    - glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() is not called every TIMER_MILLISECONDS, just once.
    - glutPostRedisplay() requests GLUT to render a new frame, so we need to call it every time we change something in the scene.
    - glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.
    - glutGet(GLUT_ELAPSED_TIME) returns the number of milliseconds since glutInit() was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's keep to this one for now.

    I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question to try and help, if they feel like it.

    Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, "Game Speed dependent on Variable FPS". The relevant code for that goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // Do all drawing below...
            (...)
        }

        void processAnimationTimer(int value) {
            // setups the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed, then by the
               elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Requests to render a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)

            // Setup the timer to be called the first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it helps the game speed stay constant regardless of the FPS, so that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding, though, that I still need to calculate elapsedTime: just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do so on time.

    I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop is in GLUT, you know, the while(game_is_running) loop in Koen's article. But it's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) it was basically doing nothing, only a black screen; the CPU spiked to 25% and remained there until I killed the game, after which it went back to normal. So I don't think that's the path to follow.

    Using glutTimerFunc() is definitely not a good approach for performing all movements/animations, as I'm limiting my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's "Constant Game Speed with Maximum FPS" solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT?

    I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach: I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen, I call glutPostRedisplay() (i.e. camera moving, some object animating, etc.), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed, then by the
               elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // All drawing code goes inside this function
            drawCompleteScene();

            glutSwapBuffers();

            /* Redraw the frame ONLY if the user is moving the camera
               (similar code will be needed to redraw the frame for other events) */
            if(!IsTupleEmpty(cameraDirection)) {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)

            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    Conclusion: it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now.

    I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?

    Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and setting aside the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
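
    For reference, a minimal sketch of what Koen's fourth solution (constant game speed with maximum FPS) might look like on top of GLUT's idle callback. The names updateGame() and renderSceneInterpolated() are hypothetical stand-ins for the logic and rendering above, and note the trade-off the question already observed: an idle callback keeps a core busy even when nothing changes.

        #include <GL/glut.h>

        // Hypothetical hooks, not from the question: fixed-step logic update,
        // and a render that interpolates between the last two logic states.
        void updateGame();
        void renderSceneInterpolated(float interpolation);

        const int TICKS_PER_SECOND = 25;
        const int SKIP_TICKS       = 1000 / TICKS_PER_SECOND;
        const int MAX_FRAMESKIP    = 5;

        int nextGameTick;  // set to glutGet(GLUT_ELAPSED_TIME) before glutMainLoop()

        void gameLoop(void) {
            // Run game logic at a fixed rate, catching up if rendering lags
            int loops = 0;
            while (glutGet(GLUT_ELAPSED_TIME) > nextGameTick && loops < MAX_FRAMESKIP) {
                updateGame();               // camera movement, physics, etc.
                nextGameTick += SKIP_TICKS;
                ++loops;
            }
            // Render as often as the hardware allows, smoothing between ticks
            float interpolation =
                float(glutGet(GLUT_ELAPSED_TIME) + SKIP_TICKS - nextGameTick)
                / float(SKIP_TICKS);
            renderSceneInterpolated(interpolation);
        }

        // In main(): glutDisplayFunc(...); glutIdleFunc(gameLoop);
        //            nextGameTick = glutGet(GLUT_ELAPSED_TIME); glutMainLoop();

    The fixed SKIP_TICKS keeps the game speed constant, while rendering runs as often as GLUT invokes the idle callback, which is exactly the split the fourth solution describes.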

    Read the article

  • xorg, nvidia, log-in all hosed - how can I completely reset graphics set-up/settings?

    - by Fred Hamilton
    I just did a fresh install of Mythbuntu 12.04.1 on my Intel MB with an nVidia 9500GT graphics card. The hardware has been working great with 10.10 for about 2 years.

    Background (optional; feel free to skip to the question): I was trying to get my component video output to generate 720p, messing around with the nvidia drivers, and now the entire display system is hosed. I can SSH in and get a terminal. Depending on which nvidia package I install/remove, I get:

    - Garbage on screen (after I "apt-get remove nvidia*")
    - A low-res graphical log-in screen where I can log in as fred or guest. If I log in as fred, it displays some text-mode status line and then goes right back to the log-in screen. If I log in as guest, I actually get the full Ubuntu desktop, but I need to be able to log in as fred.
    - Other times I get an error: "API mismatch: the NVIDIA kernel module has version 304.43, but this NVIDIA driver component has version 295.49." I've googled around, including trying a thread with the same error message, but to no effect.

    Question: How can I just reset X settings, drivers, and everything display-related to the exact state of a fresh install?
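
    Not a guaranteed recipe, but the usual "back to a stock graphics stack" sequence on Ubuntu-family systems looks roughly like this (run over SSH; package names are worth double-checking against what 12.04 actually ships):

        # Purge every nVidia driver package (quote the glob so apt sees it)
        sudo apt-get purge 'nvidia*'
        # Remove any custom X configuration so X.org falls back to autodetection
        sudo rm -f /etc/X11/xorg.conf
        # Reinstall the stock X server and the open-source nouveau driver
        sudo apt-get install --reinstall xserver-xorg-core xserver-xorg-video-nouveau
        sudo reboot

    The purge step matters because the "API mismatch" error above is the classic symptom of a kernel module from one driver version coexisting with user-space libraries from another.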

    Read the article

  • Scientists Demonstrate First-Person Shooter Games Improve Vision

    - by Jason Fitzpatrick
    Need an excuse to log a few more hours playing Call of Duty or Medal of Honor? Scientists demonstrated improved vision in test subjects after daily doses of first-person shooter games. Scientists at McMaster University took subjects who, as the result of surgery correcting congenital cataracts, had less than 20/20 vision. Subjects played Medal of Honor for a total of 40 hours over the course of 4 weeks before having their vision retested. The results? The CBC reports: The participants found improvements in detail, perception of motion and in low contrast settings. In essence, players could now read about one to one-and-a-half more lines on an optometrist’s eye chart. “We were thrilled,” Lewis said. “It’s very exciting to open up a new world of hope for these people.”

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that the libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on.

    Recently a client made us question one of our fundamental assumptions about the .NET Framework and web services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly, and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right?

    Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test our client's hypothesis. Read More >

    Read the article

  • We Found 100 Manufacturing Heroes That Focus on Innovation!

    - by Stephen Slade
    There's a good piece by Ann Grackin of ChainLink Research on the Manufacturing Leadership 100 Awards program held recently in Palm Beach, FL, Apr 30-May 3, 2012. The article (link below) summarizes the Summit with a specific focus on manufacturing innovation. There were several informative keynotes and sessions from industrial leaders who are leveraging the latest tools and technologies to make better decisions. Ann writes that she was a panelist with Cindy Reese, SVP, Worldwide Operations, Oracle, and Steven Tungate, VP/GM, Supply Chain & Innovation, Toshiba America Business Solutions, on Factories and Supply Networks of the Future.

    Ann writes: "So what are these manufacturers doing? Significant rationalization of the supply base (Toshiba, especially, has this issue since they have a long history of many acquisitions), streamlining production to increase productivity, and looking for lower-cost countries for manufacturing... No doubt firms have global customer bases, so they need to be present in these markets. However, a low-cost-country manufacturing source does introduce more risk in the supply chain. And that was discussed. Quality, security, and intellectual property protection were the critical global manufacturing issues also discussed.

    "Cindy told a fascinating story about Oracle's acquisition of Sun and the supply chain that was subsequently created. Here was one of the key points: although Oracle sells on a global basis, they now do their own factory-installed software. This keeps potential 'factory-installed malware' from getting into the servers at contract manufacturers, and prevents pirated software. In this way, Oracle ensures that they deliver the quality and security people expect."

    Learn more about the Manufacturing Leadership 100 program from Manufacturing Executive at: http://www.mlsummit.com/
    Full article link: http://www.clresearch.com/research/detail.cfm?guid=52327213-3048-79ED-99D4-E433DA64D4F0

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical and is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose and repeats examples several times, and I'm not sure he properly describes corner cases.

    The first project he outsourced was a failure. I think he over-described some details but under-described corner cases; that, and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him, and it took me half an hour to dig out a description that should have taken five minutes or less to convey. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed.

    He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders?

    [Edit: Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it, or remove these rows under these conditions. So the challenge is different from spec'ing a web app.]

    Read the article

  • Beta Soon Closing: Java SE 7 Programmer I (OCA) Exam

    - by Harold Green
    Just a reminder that you still have several weeks to take the beta exam for the new "Oracle Certified Associate, Java SE 7 Programmer" certification. From now through December 16th, you can take the "Java SE 7 Programmer I" exam (1Z1-803) for only $50 USD. Not only that, but because this is a single-exam certification, passing it puts you among the very first certified on the new Java SE 7 platform!

    You'll be happy to note that we worked hard to raise the bar for the OCA as we built the Java SE 7 certification. The content that we considered more 'conceptual knowledge-based' has been eliminated at the OCA level and replaced with far more practical content: what we often call "practitioner-level" concepts and questions. In fact, some of the topics that we previously covered at the Oracle Certified Professional (OCP) level are now covered at the OCA level. Doing this not only increases the value of the Java SE 7 OCA certification, but has also provided the opportunity for us to broaden the topics, concepts, and questions covered at the OCP certification level. All of this adds up to more value and credibility for those who get certified on Java SE 7.

    The OCA exam doesn't have prerequisites, but it is very important that you carefully review the test objectives on the exam page and assess your current skills and knowledge against that list to be sure you're ready. From the exam page you can register to take the exam at a Pearson VUE testing center near you. Below are some helpful details on the certification track and exam. Again, register now; just a few weeks are left at the special low beta price!

    QUICK LINKS:
    - Certification Track: Oracle Certified Associate (OCA), Java SE 7 Programmer
    - Certification Exam: Java SE 7 Programmer I (1Z1-803)
    - Video: Coming Soon - Java SE 7 Certification
    - Info: About Beta Exams
    - Exam Registration: Instructions | Register Here

    Read the article

  • What is a simple deformer in which vertices deform linearly with control points?

    - by sebf
    In my project I want to deform a complex mesh using a simpler 'proxy' mesh. In effect, each vertex of the proxy/collision mesh will be a control point/bone, which should deform the vertices of the main mesh attached to it according to a weight, where the weight is not dependent on the absolute distance from the control point but rather on the distance relative to the other affecting control points. The point of this is to preserve complex three-dimensional features of the main mesh while using physics implementations which expect something far simpler: low resolution, a single surface, etc. The vertices must therefore deform linearly with their respective weighted control points (i.e. no falloff fields, or all the mesh features will end up collapsed), as if each vertex were linked to a point on the plane created by the attached control points and deformed with it.

    I have tried implementing the weight-computation algorithm in this paper (page 4), but it is not working as expected, and I am wondering if it is really the best way to do what I want. What is the simplest way to 'skin'* an arbitrary mesh to another arbitrary mesh?

    *By skin I mean I need an algorithm to determine the best control points for a vertex, and their weights.
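
    For comparison, the simplest scheme with the stated property is plain normalized inverse-distance weighting (to be clear, this is not the method from the cited paper): because the weights are computed only over the attached control points and sum to 1, each vertex moves as a fixed affine combination of those points, i.e. linearly with them. A minimal sketch:

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Vec3 { float x, y, z; };
        Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
        float length(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

        // Bind time: weights of one mesh vertex w.r.t. its attached control points.
        std::vector<float> bindWeights(const Vec3& vertex,
                                       const std::vector<Vec3>& controlsAtRest) {
            std::vector<float> w(controlsAtRest.size());
            float sum = 0.0f;
            for (std::size_t i = 0; i < w.size(); ++i) {
                float d = length(vertex - controlsAtRest[i]);
                w[i] = 1.0f / (d + 1e-6f);   // relative, not absolute: normalized next
                sum += w[i];
            }
            for (float& wi : w) wi /= sum;   // weights sum to 1 => linear deformation
            return w;
        }

        // Every frame: the vertex moves by the weighted sum of control-point
        // offsets, i.e. linearly with the control points, as required.
        Vec3 deform(const Vec3& restPos,
                    const std::vector<Vec3>& controlsAtRest,
                    const std::vector<Vec3>& controlsNow,
                    const std::vector<float>& w) {
            Vec3 p = restPos;
            for (std::size_t i = 0; i < w.size(); ++i)
                p = p + (controlsNow[i] - controlsAtRest[i]) * w[i];
            return p;
        }

    One useful sanity check of the "relative distance" property: if every attached control point translates by the same offset T, the deformed vertex moves by exactly T, regardless of the absolute distances involved.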

    Read the article

  • I want to build a Virtual Machine, are there any good references?

    - by Michael Stum
    I'm looking to build a virtual machine as a platform-independent way to run some game code (essentially scripting). The virtual machines that I'm aware of in games are rather old: Infocom's Z-Machine, LucasArts' SCUMM, id Software's Quake 3. As a .NET developer, I'm familiar with the CLR, and I've looked into the CIL instructions to get an overview of what you actually implement at the VM level (as opposed to the language level). I've also dabbled a bit in 6502 assembler during the last year.

    The thing is, now that I want¹ to implement one, I need to dig a bit deeper. I know that there are stack-based and register-based VMs, but I don't really know which one is better at what, and whether there are other, or hybrid, approaches. I need to deal with memory management, decide which low-level types are part of the VM, and understand why stuff like ldstr works the way it does. My only reference book (apart from the Z-Machine stuff) is the annotated CLI standard, but I wonder if there is a better, more general/fundamental text on VMs; basically something like the Dragon Book, but for VMs. I'm aware of Donald Knuth's The Art of Computer Programming, which uses a register-based VM, but I'm not sure how applicable that series still is, especially since it's still unfinished.

    Clarification: The goal is to build a specialized VM. For example, Infocom's Z-Machine contains opcodes for setting the background color or playing a sound. So I need to figure out how much goes into the VM as opcodes versus into the compiler that takes a script (language TBD) and generates the bytecode from it; but for that I need to understand what I'm really doing.

    ¹ I know, modern technology would allow me to just interpret a high-level scripting language on the fly. But where is the fun in that? :) It's also a bit hard to google, because "virtual machines" is nowadays usually associated with VMware-type OS virtualization...
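
    As a toy illustration of the stack-based side of that question (not a recommendation over register-based designs), the heart of such a VM is just a fetch-decode-execute loop over bytecode; game-specific opcodes like the Z-Machine's "set background color" would be additional cases in the same switch:

        #include <cstdint>
        #include <iostream>
        #include <vector>

        enum Op : std::uint8_t { PUSH, ADD, MUL, PRINT, HALT };

        void run(const std::vector<std::uint8_t>& code) {
            std::vector<std::int64_t> stack;
            std::size_t pc = 0;                     // program counter
            while (pc < code.size()) {
                switch (code[pc++]) {               // fetch and decode
                case PUSH:  stack.push_back(code[pc++]); break;  // 1-byte immediate
                case ADD: { auto b = stack.back(); stack.pop_back();
                            stack.back() += b; break; }
                case MUL: { auto b = stack.back(); stack.pop_back();
                            stack.back() *= b; break; }
                case PRINT: std::cout << stack.back() << '\n'; break;
                case HALT:  return;
                }
            }
        }

        int main() {
            // (2 + 3) * 4 -> prints 20
            run({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});
        }

    The usual trade-off in a nutshell: stack machines have simpler compilers and denser bytecode (operands are implicit), while register machines execute fewer, wider instructions; either can host the same game-facing opcodes.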

    Read the article

  • Microsoft Researchers show off the best touch screen ever made. Better than Apple touch screens!

    - by Gopinath
    All the touch devices we have on the market today, like iPads, iPhones, Samsung tablets and phones, etc., have one small issue: 100 milliseconds of lag. The lag is the amount of time a touch device takes to respond after you touch it. The 100 milliseconds of lag may not be an issue when you are tapping and swiping interface elements on a device, but it becomes apparent when you move your finger around the screen quickly. For example, in any painting app the lag is very obvious: the screen responds more slowly than an artist can paint with a finger.

    Researchers at Microsoft's labs have come out with a prototype touch device that drastically cuts the 100 milliseconds of lag down to just 1 millisecond. That's 100 times faster than today's touch screen devices. Check out the video embedded below for a demo of the new touch screen.

    Over at TechCrunch, Chris Velazco says: "The difference is staggering, especially when Dietz trots out the slow-motion footage. With the delay between touch input and screen response slashed by orders of magnitude, a device that sports the sort of super-low-latency Dietz envisions has the potential to feel far more (for lack of a better term) natural than its brethren. There's zero delay when you slide a checker across a board, for example, and bringing that sort of instantaneous feedback to the many screens in our lives could help to bridge the gap between operating a bit of software and the feeling of interacting with objects."

    It would be a great boost to Microsoft's tablet strategy if they succeed in bringing this research to the mass market and allow its partners to use the technology on Windows 8 tablets.

    Read the article

  • How can I make video games if I don't like programming?

    - by hoper
    I am studying C++ in school (my major is computer programming). Honestly, my grades are not so good, and the assignments are really hard. Sometimes I feel sad that I will spend 8-10 hours per day coding (which is stressful) in my future job. But I still want to make video games; maybe that is the only reason I am taking all of these stressful courses. I always write down plots, stories, characters, fictional gaming worlds...

    Once, I thought I should study an artistic discipline such as game design rather than computer technology such as C++, C#, etc. However, many of the most famous game designers (or directors), such as Kojima and Miyamoto, used to be good programmers, and companies actually promote programmers to directors because they understand how games are made.

    I've tried to find colleges or universities that teach game design programs. However, one article listing the top 10 game design schools in North America seems untrustworthy, because the survey company scores the schools only from interviews with students. I once tried to attend the Art Institute of Vancouver, which is ranked 7th according to that article, but a programmer who used to be an instructor there told me the truth: the employment rate of its graduates is low.

    How can I have a future making games if I don't like programming?

    Read the article
