Search Results

Search found 19090 results on 764 pages for 'greatest n per group'.


  • Limiting calls to WCF services from BizTalk

    - by IntegrationOverload
    ** WORK IN PROGRESS ** This is just a placeholder for the full article that is in progress.
    The problem: My BTS solution was receiving thousands of messages at once. After processing by BTS I needed to send them on via one of several WCF services, depending on the message content. The problem is that, due to the asynchronous nature of BizTalk, the WCF services were getting hammered and could not cope with the load. Note: it is possible to limit the SOAP calls in the BtsNtSvc.exe.Config file, but that does not have the desired effect for Net-TCP WCF services.
    The solution: I created a new MessageType for the messages in question and posted them to the BTS message box. This schema included the URL they were being sent to as a promoted property. I then subscribed to the message type from a new orchestration (that does just the WCF send), using the URL as a correlation ID. This created a singleton orchestration that was instantiated when the first message hit the message box. It then waits for further messages with the same correlation ID and type and processes them one at a time using a loop shape with a timer (a pretty standard pattern for processing related messages). Image to go here. This limits the number of calls to each individual WCF service to 1. That is a good start, but the services can handle more than that and I didn't want to create a bottleneck. So I then constructed the correlation ID from the URL concatenated with a random number between 1 and 10. This gives 10 possible correlation IDs per URL, and so 10 instances of the singleton orchestration per WCF service. Just what I needed, and the upper random number is a configuration value in SSO, so I can change the maximum connections without touching the code.
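    The gist of the correlation-key idea, sketched outside BizTalk in plain C# (the helper name and key format are illustrative, not the actual orchestration code):

      using System;

      static class CorrelationKey
      {
          static readonly Random Rng = new Random();

          // Appending a random bucket (1..maxConnections) to the URL yields
          // up to maxConnections distinct correlation IDs per service, and
          // therefore that many singleton orchestration instances.
          public static string For(string url, int maxConnections)
          {
              int bucket = Rng.Next(1, maxConnections + 1); // upper bound is exclusive
              return url + "#" + bucket;
          }
      }

    In the orchestration itself, maxConnections would come from the SSO configuration value described above.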


  • Setting up a shared media drive

    - by Sam Brightman
    I want a shared media drive to be transparently usable by all users, while also sticking to FHS and Ubuntu standards. The former takes priority if necessary. I currently mount it at /media/Stuff, but /media is supposed to be for external media, I believe. The main issue is setting permissions so that multiple users can be granted read and write access while working within the same directories. InstallingANewHardDrive seems both slightly confused and not what I want. It claims that this sets ownership for the top-level directory (despite the recursion flag):
      sudo chown -R USERNAME:USERNAME /media/mynewdrive
    And that this will let multiple users create files and sub-directories but only delete their own:
      sudo chgrp plugdev /media/mynewdrive
      sudo chmod g+w /media/mynewdrive
      sudo chmod +t /media/mynewdrive
    However, the group-writable bit does not seem to be inherited, which is troublesome for keeping things organised (it prevents creation inside sub-folders originally made by another user). The sticky bit is probably also unwanted for the same reason, although currently it seems that userA (perhaps the owner of the mount point?) can delete userB's files, but not vice versa. This is fine, as long as userB can create files inside the directory of userA. So: what is the correct mount point? Is plugdev the correct group? Most importantly, how do I set up permissions to maintain an organised media drive? I do not want to be running cron jobs to set permissions regularly!
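    One untested sketch of the usual fix for the inheritance problem: the setgid bit on a directory (the leading 2 in the mode) makes new files and sub-directories inherit the directory's group, and it propagates to sub-directories created later. Combined with a dedicated group (the "media" group, mount point and user names here are hypothetical), it avoids the cron-job workaround:
      sudo groupadd media
      sudo usermod -a -G media userA
      sudo usermod -a -G media userB
      sudo chgrp media /mnt/media
      sudo chmod 2775 /mnt/media   # 2 = setgid: new entries inherit the group
    A umask of 002 for the users involved is also needed so that files they create are group-writable.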


  • Dropbox Doubles Referral Credit; Score 500MB for Each Friend You Refer

    - by Jason Fitzpatrick
    Dropbox is doubling the free storage you get per referral from 250MB to 500MB–better yet, the bonus is retroactive and applies to referrals you’ve already made. From the Dropbox blog: How much space is that, exactly? For every friend you invite that installs Dropbox, you’ll both get 500 MB of free space. If you’ve got a free account, you can invite up to 32 people for a whopping total of 16 GB of extra space. Pro accounts now earn 1 GB per referral, for a total of 32 GB of extra space. Have you already invited a bunch of people? Don’t worry. Within a few days, you’ll get full credit for every referral that’s already been completed. Boom! Hit up the link below for the full announcement. Dropbox Referrals Now Twice As Nice [Dropbox]


  • Impacting the Future through Collaboration at Alliance 14

    - by Jeb Dasteel-Oracle
    We’re hearing good things about the Alliance 14 conference held in Las Vegas by the Higher Education Users Group (HEUG) back in March. For those of you who aren’t familiar with them, Alliance conferences are global events dedicated to educating members and the wider world on how higher-education institutions can use Oracle applications to change how they do business. The HEUG is an all-volunteer organization made up of individuals who collaborate with Oracle as part of the evolving higher-education industry. Conference participants network with peers from other institutions (regionally and globally) to share challenges, discuss solutions and ideas, and collaborate on HEUG strategic initiatives. The HEUG enables each institution to be a part of the ever-changing Oracle landscape. Watch the video below and hear directly from the attendees about their experience with Oracle and how being part of the HEUG has allowed them to collaborate with one of their most important resources... and with each other. Oracle is committed to fostering a strong and independent network of user groups worldwide. Currently, over 900 groups provide dynamic forums for customers to share information, experiences, and expertise. If you’re interested in more information or in joining an Oracle User Group, become part of a vibrant network of engaged users finding the best ways to get the most value from their Oracle investment and collaborating to provide a unified feedback voice to Oracle. Catch you next time, Jeb


  • SOA performance on SPARC T5 benchmark results

    - by JuergenKress
    The brand new, super fast SPARC T5 servers are available. The platform is superb for running large SOA Suite environments or for consolidating your whole middleware platform. Some performance advice, recommended for all workloads:
      - Performance profile for SOA apps on Oracle Solaris 11
      - BPEL (Fusion Order Demo) instances per second
      - OSB (messages / transformations per second)
      - Crypto acceleration study for SOA transformations
      - SPARC T4 and T5 platform testing, pre-tuning
    Performance is suitable for mid-to-high-range enterprises in a stand-alone SOA deployment or a virtualized consolidation environment shared with Oracle applications:
      - 2.2x to 5x faster than SPARC T3 servers
      - 25% faster SOA throughput, core to core, than Intel 5600-series servers (running Exalogic software)
      - SPARC T5 has 2x the consolidation density of Intel 5600-class processors
      - 2x faster initial deployment time using Optimized Solutions pre-tested configuration steps
      - Over 200 application adapters for the easiest Oracle software integration
    Would you like the details? We can share the T5 SOA Suite performance benchmarks with you on a 1:1 basis; please contact your local partner manager or myself! SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: T5,T5 Sparc,T5 SOA,benchmark,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress


  • Oracle Customer Experience (CX) Solutions Make Retailers Merry

    - by Tuula Fai
    Tis the season to be jolly. If you’re a retailer, your level of jolliness depends on sales. So you watch trends like U.S. store traffic increasing 3.5% to 308 million on Black Friday but sales actually falling 1.8% to $11.2 billion. Fortunately, by the end of November, retail sales were up 3.7% over the previous year, thanks to life recovering after Hurricane Sandy. And online sales topped $1 billion for the first time ever! Who are the companies improving their sales online? They are big names like Walgreen’s Drugstore.com, Nordstrom’s HauteLook, and Intuit. More importantly, how are they doing it? They use cutting-edge business practices enabled by Oracle’s CX Cloud Service & Support solutions to:
      - Increase conversion rates and order sizes (Customer Acquisition)
      - Enhance customer satisfaction and loyalty (Customer Retention)
      - Reduce contact center costs and improve agent productivity (Operational Efficiency)
    Acquisition + Retention + Operational Efficiency = Sustainable Growth and Profits. That’s the magic formula for retail customer service success. Don’t take our word for it. Look at the results of these Oracle customers:
      - Walgreen’s Drugstore.com—30% sales conversion rate on chat sessions, with a 20% increase in shopping cart size
      - Nordstrom’s HauteLook—40,000+ interactions per month—20% growth over last year—efficiently managed by 40 agents, with no increase in IT costs
      - Intuit—50% increase in customer satisfaction and 70% decrease in cost per interaction
    Using Oracle’s CX Cloud & Service solutions, these retailers deliver consistent, relevant, and personalized experiences across all touchpoints, including social, mobile, and web. Their ability to connect with customers anytime, anywhere—providing the right answer at the right time—helps them create a defensible advantage in the marketplace. Want to learn more? Please visit http://www.oracle.com/goto/cloudlaunchpad for free resources on delivering exceptional customer service in the Cloud. Also, watch our YouTube channel to learn more about seamless multichannel retail and Winston Furnishings’ exceptional customer experience.


  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with the case where the cache entry appears to be a couple dozen bytes but ends up occupying several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements.
    In most cases, this increase applies to only a few small NamedCaches, and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations.
    Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects.
    At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer).
    At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.
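    A minimal plain-Java sketch of the "map of maps" grouping (the types and grouping function are illustrative; a real Coherence version would do the inner lookup inside an entry processor so the whole group never travels to the client):

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      public class GroupedCache {
          // One cache entry per group of logical entries, amortizing
          // the per-entry overhead across the whole group.
          private final Map<Long, Map<Long, String>> cache = new ConcurrentHashMap<>();

          private static long groupOf(long key) {
              return key >>> 8; // partial hash: up to 256 logical entries per group
          }

          public void put(long key, String value) {
              cache.computeIfAbsent(groupOf(key), g -> new ConcurrentHashMap<>())
                   .put(key, value);
          }

          public String get(long key) {
              Map<Long, String> group = cache.get(groupOf(key));
              return group == null ? null : group.get(key);
          }
      }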


  • The clock problem - to if or not to if?

    - by trejder
    Let's say we have a simple digital clock. To "power" it, we use a routine executed every second, in which we update the seconds part. But what about the minutes and hours parts? Which is better / more professional / offers better performance:
      1. Ignore all checking and redraw the hour, minute and seconds parts every second.
      2. Use an if plus a variable to check whether 60 (or 3600) seconds have passed, and update the minute / hour part only at those precise moments.
    This leads us to the question: what is better -- unnecessary redraws (the first approach) or extra ifs? I've just spotted a JavaScript digital clock, one of millions of similar ones on billions of pages, and I noticed that all three parts (hours, minutes and seconds) are updated every second, though the first changes its value only once per 3600 seconds and the second only once per 60 seconds. I'm not a very experienced developer, so I might be wrong. But everything I've learnt up until now tells me that ifs are far cheaper than executing drawing / refreshing sequences only to draw the same content.
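    One JavaScript sketch of the second approach that avoids counting ticks: compare each part's current value to what was last drawn, and redraw only on change (drawSeconds, drawMinutes and drawHours are hypothetical rendering functions):

      let last = { h: -1, m: -1, s: -1 };

      function tick() {
        const now = new Date();
        const h = now.getHours(), m = now.getMinutes(), s = now.getSeconds();
        if (s !== last.s) { drawSeconds(s); last.s = s; } // fires every second
        if (m !== last.m) { drawMinutes(m); last.m = m; } // ~once per minute
        if (h !== last.h) { drawHours(h);   last.h = h; } // ~once per hour
      }
      setInterval(tick, 1000);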


  • Save Actions in NetBeans IDE 7.3

    - by Geertjan
    Several developers, especially those familiar with equivalent functionality in Eclipse, have been asking for so-called "Save Actions", that is, support for actions that are automatically performed when a file is saved. Here's the related NetBeans issue: http://netbeans.org/bugzilla/show_bug.cgi?id=140719 In NetBeans IDE 7.3, the issue is resolved as follows. A new "On Save" tab is found in the "Editor" tab of the Options window. Defaults for all languages are set via the "All Languages" item in the drop-down. Here, for all languages, you can specify what kind of formatting and space removal (all, none, or only modified lines) will occur automatically when a file is saved. Via the drop-down, you see all the languages supported by the IDE; you can pick a language and then override the default On Save settings. Per language, there may be additional On Save settings. For example, for Java, you can specify that, when saving a Java file, unused import statements should be removed and/or the rules you've set for organizing import statements should be applied. There's also a set of new NetBeans IDE APIs for adding new On Save functionality via custom plugins. Via MIME type registration of OnSaveTask.Factory, you can register new On Save actions that will be run for files conforming to the relevant MIME type. There are also extension points via the Editor Options API for registering new panels (one per language) to the On Save panel in the Options window. I'll demonstrate some examples of the APIs in upcoming blog entries.
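    Pending those examples, here is a rough sketch of what the MIME-type registration could look like (the SPI names follow the OnSaveTask API mentioned above, but treat the exact packages and signatures as assumptions to be checked against the 7.3 Javadoc):

      import org.netbeans.api.editor.mimelookup.MimeRegistration;
      import org.netbeans.spi.editor.document.OnSaveTask;

      public class MyOnSaveTaskFactory implements OnSaveTask.Factory {

          @MimeRegistration(mimeType = "text/x-java", service = OnSaveTask.Factory.class)
          public static OnSaveTask.Factory create() {
              return new MyOnSaveTaskFactory();
          }

          @Override
          public OnSaveTask createTask(OnSaveTask.Context context) {
              return new OnSaveTask() {
                  @Override public void performTask() {
                      // inspect/modify context.getDocument() here
                  }
                  @Override public void runLocked(Runnable run) { run.run(); }
                  @Override public boolean cancel() { return true; }
              };
          }
      }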


  • International Pricing of Software [closed]

    - by arachnode.net
    I operate a small company that charges $99 for a piece of software. I'd like to know what would be a fair price for non-US customers. Today I sold a license to a party in South Africa. He told me he had been watching the project for two years while a business justification could be made for the purchase, as SA's currency is nine times weaker than the US dollar. I found this resource detailing how much a Big Mac costs in various countries: http://howmuchatyourplace.com/how_much_does/Big%20Mac_cost.php I realize that the cost of producing a Big Mac varies from locale to locale, as does the demand for one. I am aware that many software companies charge prices in local currencies that equate to the price in US dollars. I am aware that my costs remain fixed, and obviously I cannot discount the rate at which my time costs me. I'm OK with earning less per sale, as I would rather get my software onto the desktops of those that need it than have them try to write it themselves. Support is light, and I can usually point a user to an existing blog or forum post. Being a resident of Hawaii, I am aware that certain goods and services cost more here. Power is up to six times as much per kWh as it is in, say, Seattle, and wages are approximately 60% of what they are for my profession (programmer). I'd like to offer my software at a price that would be fair for everyone around the globe. If a currency is 2 foreign units to 1 US dollar, and goods and services cost 50% more, and pay for an equivalent job is 50% of what it is here, should I charge, say, $50 instead of $99? Is there a resource which would allow me to input a price in US dollars and adjust it for a list of international locations?
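    The arithmetic behind the $50 guess, sketched in JavaScript with made-up index numbers (no real price or exchange data):

      // Purchasing-power adjustment in the spirit of the Big Mac index.
      const usPrice = 99.0;
      const bigMacUS = 5.0;    // hypothetical US Big Mac price, in USD
      const bigMacLocal = 2.5; // hypothetical local price, converted to USD
      const suggested = usPrice * (bigMacLocal / bigMacUS);
      console.log("Suggested local price: $" + suggested.toFixed(2)); // $49.50

    The ratio of local to US prices for a comparable basket stands in for the "goods cost X%, wages are Y%" reasoning above.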


  • About the new Microsoft Innovation Center (MIC) in Miami

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2014/08/21/about-the-new-microsoft-innovation-center-mic-in-miami.aspx Last night I attended a meeting at the new MIC in Miami, run by Blain Barton (@blainbar), Sr IT Pro Evangelist at Microsoft. The meeting was well attended and is run in a casual, user-group format. Many of the local Microsoft MVPs and group leaders were in attendance as well, which allows technical folks to connect with community leaders in the area. If you live in South Florida, I highly recommend looking out for future meetings at the MIC; most will be about the Microsoft Azure platform, covering either IT Pro or Dev topics. For more information on the MIC, check out this announcement: http://www.microsoft.com/en-us/news/press/2014/may14/05-02miamiinnovationpr.aspx About Herve Roggero: Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a Pluralsight author, and runs the Azure Florida Association.


  • Password not working for sudo ("Authentication failure")

    - by Souta
    Before I mention anything further, DO NOT give me a response saying that the terminal won't show password input. I'm AWARE of that. I'm typing my user password in (not a capslock issue), and for some reason it still says 'Authentication failure'. Is there some other password (one I'm not aware of) I'm supposed to be using other than my user password? I've had this Ubuntu before, on another hard drive, and I didn't have this problem. (And it was the same Ubuntu, Ubuntu 12.04 LTS.)
      ai@AiNekoYokai:~$ groups
      ai adm cdrom sudo dip plugdev lpadmin sambashare
      ai@AiNekoYokai:~$ lsb_release -rd
      Description: Ubuntu 12.04 LTS
      Release: 12.04
      ai@AiNekoYokai:~$ pkexec cat /etc/sudoers
      #
      # This file MUST be edited with the 'visudo' command as root.
      #
      # Please consider adding local content in /etc/sudoers.d/ instead of
      # directly modifying this file.
      #
      # See the man page for details on how to write a sudoers file.
      #
      Defaults env_reset
      Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
      # Host alias specification
      # User alias specification
      # Cmnd alias specification
      # User privilege specification
      root ALL=(ALL:ALL) ALL
      # Members of the admin group may gain root privileges
      %admin ALL=(ALL) ALL
      # Allow members of group sudo to execute any command
      %sudo ALL=(ALL:ALL) ALL
      # See sudoers(5) for more information on "#include" directives:
      #includedir /etc/sudoers.d
    I can log in with my password, but it's not accepted as valid for authentication <-- that is pretty much my issue. (Although I haven't gone into recovery mode.) I've run:
      ai@AiNekoYokai:~$ ls /etc/sudoers.d
      README
    And I also reinstalled sudo with:
      pkexec apt-get update
      pkexec apt-get --purge --reinstall install sudo
      pkexec usermod -a -G admin $USER   <- says admin does not exist
      su $USER   <- worked for me; however, my password still does not do much (in the sense of not working for other things)
    I changed my password with pkexec passwd $USER and was able to change it, no problem. gksudo xclock was something I was able to get into, no problem (the clock showed):
      ai@AiNekoYokai:~$ gksudo xclock


  • Trade In, Trade Up Promotion: SPARC Consolidation Now Through May 31st

    - by swalker
    Dear Partner, Installed Base Business (IBB) technology refresh is one of the most important activities for Oracle, for you, and for your customers. It allows your existing customers to benefit from the most up-to-date, best-of-breed Oracle products. And it’s an exciting time to perform a technology refresh: a new SPARC promotion is available now, closing 31st May 2012. Customers trading in older SPARC systems and upgrading to a new SPARC SuperCluster T4-4 or SPARC Enterprise M8000/M9000 can get $4,000 per CPU. The discount is pre-approved and upfront (maximum discounts apply). The major highlights are as follows:
      - Targeted systems: upgrade to SPARC M8000, M9000, SuperCluster
      - Qualified installed base upgrade from: all older generations of SPARC systems
      - Promotional offer: trade-in value of $4K per CPU; pre-approved maximum discount (including trade-in) not to exceed 60% on M8/9000 systems and 25% on SuperCluster; no-cost dock-to-dock shipping, and environmentally safe disposal of the returned hardware through Oracle best-of-class recycling processes
    Recommendations: we recommend you take the following actions:
      - As usual, please register your opportunities in OMM
      - When you do so, please make sure you place the following campaign name in the “Marketing Initiative” field of OMM: EMEA_Tech Refresh-IBB Campaign_12H1_Follow Up_O
    For all the details, please view the rules and FAQs. For more information, please visit the Promo Partner Site here. For more information on IBB and the Oracle Upgrade Advantage Program (UAP): http://www.oracle.com/us/products/servers-storage/upgrade-advantage-program/index.html http://www.oracle.com/partners/secure/sales/oracle-ibb-program-for-partners-184291.html Contacts: for questions, please contact your favorite Oracle Partner Account Manager.


  • A Virtual Seat at the Architect’s Table

    - by Bob Rhubart
    I always have fun producing the Arch2Arch podcasts, but the latest batch was all that and a bag of chips, since I was required to do absolutely no preparation and very little talking, and since the conversation was reminiscent of those I’ve had with various architects (you know who you are) in various watering holes: free-ranging, extemporaneous, and far, far from dull. The three most recent programs were recorded during a virtual mini meet-up of architects back in February. You’ll find more detail here, but in a nutshell, I invited several previous Arch2Arch panelists to join me on Skype to talk about whatever was on their minds. The resulting conversation yielded the three latest programs. Check them out – it’s like you’re sitting at the table. Listen to Part 1 Listen to Part 2 Listen to Part 3 The conversation begins with the participants’ responses to my challenge to fill in the blank in the sentence “Most conversations about Enterprise Architecture are too ____.” From there the conversation morphed into a discussion of the sheer joy of finding funding for architecture projects. The architects seated at the virtual table in these programs are:
      - Todd Biske, a veteran enterprise architect and the author of the book SOA Governance, from Packt Publishing (LinkedIn | Twitter | Blog | Oracle Mix)
      - Jordan Braunstein, an Oracle ACE Director and the Business Integration and Architecture Partner at TUSC (Blog | Twitter | LinkedIn | Oracle Mix)
      - Basheer Khan, also an Oracle ACE Director, and the founder and CEO of Innowave Technology (Blog | LinkedIn | Twitter | Oracle Mix)
      - Pat Shepherd, an enterprise architect with the Oracle Enterprise Solutions Group (Oracle Mix | LinkedIn | Blog)
    Coming soon: I was so pleased with the results of this meet-up format that I did the same thing for the next series of programs. These free-ranging conversations feature a different group of participants covering a different set of topics, including the fear of SOA, the misunderstanding and misinformation behind that fear, and the idea of beauty in architecture. Yeah, you read that right. So stay tuned. Technorati Tags: oracle,otn,enterprise architecture,podcast,arch2arch,meet-up


  • Upgraded from 11.04 to 11.10, there was an error, now the system won't initialize

    - by Eric
    This morning the system gave me a message that my version (11.04) was no longer supported, and I took the 'upgrade' option (to 11.10). While installing the various components I encountered a message to the effect that there was an error and the system may have become unusable. Among the messages:
      E: Sub-process /usr/bin/dpkg received a segmentation fault ... returned an error code (1)
    I was given an option to do several things, one of which seemed to mean that it would attempt to roll back to the previous version (the default), which I took. After the process ran it said the upgrade process had finished, but there were errors. I attempted to open a console so I could enter ubuntu-bug update-manager /var/log/dist-upgrade, per the instructions I received with the error message, but the console failed during initialization. I restarted the machine, and the screen has stopped with the following contents:
      * Starting bluetooth
      * Stopping save kernel messages
      * Starting CUPS printing spooler/server
      * PulseAudio configured per-user sessions
      saned disabled: edit /etc/default/saned
      * Starting up Cisco VPN daemon
      * Starting anac(h)ronistic cron
      * Stopping anac(h)ronistic cron
    Each of these steps was followed by [ OK ]. What are my options? Any help appreciated!


  • Encapsulating code in F# (Part 2)

    - by MarkPearl
    In part one of this series I showed an example of encapsulation within a local definition. This is useful to know so that you are aware of the scope of value holders etc., but what I am more interested in is encapsulation with regards to generating useful F# code libraries in .Net; this is done by using namespaces and modules. Let's have a look at some C# code first…
      using System;

      namespace EncapsulationNS
      {
          public class EncapsulationCLS
          {
              public static void TestMethod()
              {
                  Console.WriteLine("Hello");
              }
          }
      }
    Pretty simple stuff… now the F# equivalent….
      namespace EncapsulationNS

      module EncapsulationMDL =
          let TestFunction =
              System.Console.WriteLine("Hello")
              ()
    Even easier… let's look at some specifics about F# namespaces. Namespaces are open, meaning that multiple source files and assemblies can contribute to the same namespace. So namespaces are a great way to group modules together, and the question needs to be asked: what role do modules play? For me, the F# module is in many ways similar to the VB6 days of modules. In VB6, modules were separate files and simply allowed us to group certain methods together. I find it easier to visualize F# modules this way than to compare them to C# classes. That being said, one is not restricted to one module per file – there is flexibility to have multiple modules in one code file, however with my limited F# experience I would still recommend using the file as the standard level of separating modules, as it is very easy to then find your way around a solution. An important note about interop between F# and other .Net languages: I wrote a blog post a while back about a very basic F# to C# interop. If I were to reference an F# library in a C# project (for instance ‘TestFunction’), C# would show this method as a static method call, meaning I would not have to instantiate an instance of the module.


  • How much should a programmer read in order to keep up to date? [closed]

    - by anything
    There are lots of technical books available. Below are a few links which list some good ones:
      - If you could only have one programming related book on your bookshelf what would it be and why?
      - What non-programming books should a programmer read to help develop programming/thinking skills?
      - Best books on the theory and practice of software architecture?
      - http://stackoverflow.com/questions/1711/what-is-the-single-most-influential-book-every-programmer-should-read
    ... and the list can go on and on and on. It would be really difficult to read all of the above-mentioned books; I am not sure it is even possible for anyone to do that. Even if you filter the list based on one's area of interest or work, it is still very large... and the technology keeps on changing (even more books :-( ). So, my question is: how much should a programmer read, let's say per year? How many hours should one put into such activities to stay up to date? How do we find the time required? PS: The average programmer reads less than one book per year (according to Code Complete). What about the good programmers?


  • How do I properly implement zooming in my game?

    - by Rudy_TM
    I'm trying to implement a zoom feature but I have a problem. I am zooming in and out a camera with a pinch gesture, and I update the camera on each render, but my sprites keep their original position and don't change with the zoom in or zoom out. The libraries are from libgdx. What am I missing?
      private void zoomIn() {
          ((OrthographicCamera)this.stage.getCamera()).zoom += .01;
      }

      public boolean pinch(Vector2 arg0, Vector2 arg1, Vector2 arg2, Vector2 arg3) {
          // TODO Auto-generated method stub
          zoomIn();
          return false;
      }

      public void render(float arg0) {
          this.gl.glClear(GL10.GL_DEPTH_BUFFER_BIT | GL10.GL_COLOR_BUFFER_BIT);
          ((OrthographicCamera)this.stage.getCamera()).update();
          this.stage.draw();
      }

      public boolean touchDown(int arg0, int arg1, int arg2) {
          this.stage.toStageCoordinates(arg0, arg1, point);
          Actor actor = this.stage.hit(point.x, point.y);
          if(actor instanceof Group) {
              ((LevelSelect)((Group) actor).getActors().get(0)).touched();
          }
          return true;
      }
    (Screenshots: Zoom In, Zoom Out)


  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display its results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and allow drilling down to individual results, which keep next/previous links for navigating through the result set. Re-executing the search on each page request to get the appropriate results for that page can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs might exist for each choice. For example: storing the IDs of the entities in the result set on the client, storing the IDs (or the entities themselves) in the user's session on the web server, or using a temporary table in the database. I'm not looking for a single solution, as different scenarios may result in different approaches (and such a question would be more suited to stackoverflow.com than here), but rather a design comparison between the possible approaches.
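    For concreteness, a minimal JavaScript sketch of the snapshot idea being weighed here (names invented; the snapshot store could equally be the client, the session, or a temp table, which is exactly the tradeoff in question):

      const snapshots = new Map(); // searchId -> frozen, ordered array of entity IDs

      async function getResultIds(searchId, executeQuery) {
        if (!snapshots.has(searchId)) {
          snapshots.set(searchId, await executeQuery()); // expensive, done once
        }
        return snapshots.get(searchId);
      }

      function page(ids, pageNum, perPage = 20) {
        const start = (pageNum - 1) * perPage;
        return ids.slice(start, start + perPage);
      }

      function neighbors(ids, currentId) {
        const i = ids.indexOf(currentId);
        return {
          prev: i > 0 ? ids[i - 1] : null,
          next: i >= 0 && i + 1 < ids.length ? ids[i + 1] : null,
        };
      }

    Because next/previous are computed against the frozen ID list, a result can never reappear on a later page; the cost is staleness plus the memory or storage of one ID list per active search.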


  • What should a programmer's yearly routine be to maximize their technical skills?

    - by sguptaet
    Two years ago I made a big career change into programming. I learned various technologies on my own, without any prior experience, and I really love it. I feel lucky with all the resources around us to help us learn: books, courses, open source, etc. There are so many avenues. I'm wondering what a good routine would be to follow to maximize my software development skills. I don't believe just building software is the way, because that leaves no time for learning new concepts or technologies. I'm looking for an answer like this:
      - Take a new-concept sabbatical/workshop 2 weeks per year.
      - Read 1 theoretical and 1 practical programming book per year.
      - Learn 1 additional language every 2 years.
      - Take a 1 week vacation every 6 months.
      - Etc.
    I realize that the above might sound naive and unrealistic, as there are so many factors. But I'd like to know the "recipe" that you think is best, one that will serve as a guide for people.


  • Helping to Reduce the Page Compression Failure Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even on a per-index basis, wouldn't it? This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:
      +-----------------+--------------+------+
      | Field           | Type         | Null |
      +-----------------+--------------+------+
      | database_name   | varchar(192) | NO   |
      | table_name      | varchar(192) | NO   |
      | index_name      | varchar(192) | NO   |
      | compress_ops    | int(11)      | NO   |
      | compress_ops_ok | int(11)      | NO   |
      | compress_time   | int(11)      | NO   |
      | uncompress_ops  | int(11)      | NO   |
      | uncompress_time | int(11)      | NO   |
      +-----------------+--------------+------+
    similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like
      SELECT
        database_name,
        table_name,
        index_name,
        compress_ops - compress_ops_ok AS failures
      FROM information_schema.innodb_cmp_per_index
      ORDER BY failures DESC;
    would reveal the most problematic tables and indexes that have the highest compression failure rate. From there on, the way to improve performance would be to try to increase the compressed page size or change the structure of the table/indexes or the data being stored, and see whether that has a positive impact on performance.
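    For context, an illustrative sketch (not from the article): per-index compression stats only exist for tables that actually use compression, e.g. one created like this in MySQL 5.6 with innodb_file_per_table enabled and innodb_file_format=Barracuda:

      -- Hypothetical compressed table; its indexes will then show up
      -- in INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX.
      CREATE TABLE t (
        id  BIGINT NOT NULL PRIMARY KEY,
        val VARCHAR(100)
      ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

    Note also that collection for INNODB_CMP_PER_INDEX is gated behind the innodb_cmp_per_index_enabled server variable, since per-index accounting adds some overhead.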


  • Would this data requirement suit a Document -Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary: a user has to record a thought in one column, describe the situation, rate how they felt, etc. The other requirement is that a user should be able to create their own diary templates. They might need a 10-column diary entry per day, or might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates, as their fields are fixed. But for custom diary templates I imagine I would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries), and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance / scaling aside, would it be any simpler or make more sense to use a document-oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.
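    For comparison, one hedged sketch of how a single custom entry could look as one MongoDB document (field names invented for illustration; the template's column definitions simply travel with the entry instead of being joined across three tables):

      db.entries.insert({
        userId: "user123",          // placeholder identifier
        date: ISODate("2012-06-01"),
        template: "thought-diary",
        fields: [
          { label: "Thought",   type: "text",   value: "..." },
          { label: "Situation", type: "text",   value: "..." },
          { label: "Felt",      type: "rating", value: 7, outOf: 10 }
        ]
      });

    A 10-column template rated out of 50 is just a different fields array, with no schema change.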


  • HTML5 - Does it have the power to handle a large 2D game with a huge world?

    - by user15858
    I have been using XNA Game Studio but, due to private reasons (as well as the ability to publish anywhere, and my heavy interest in the Isogenic engine), I would like to switch to HTML5. However, I have very high 2D graphics demands for my game. The game itself will have an on-disk size of anywhere between 6GB (min) and 12GB (max), which would be a full game deployed offline. The sizes of individual images aren't significantly large, so streaming would be entirely possible if only the assets currently required were streamed as needed. The game has a massive file size because of the sheer amount of content. Some images or spritesheets would be quite massive (e.g. a very large dragon, which if animated in a spritesheet would be split into two 4096x4096 sheets or one 8192x8192 sheet). Most assets would be very small: about 7MB for a full character with 15 animations in every direction (not all animations required immediately), so only a few hundred KB to download before the game loads. My question, however, is whether the graphical power of HTML5 is enough to animate several characters on screen at once when it flips through frames quite rapidly. All my sprites have about 25 frames per animation, 5 directions (a spritesheet for each direction & animation), and run at 30fps. Upon changing direction or animation, or when a new character enters, spritesheets would change and be constantly loading/unloading. If I pack all directions into a single sheet, it would be about 2048x2048 per sheet. Most frameworks have no problem with this, but I am afraid from what I read that HTML5's graphical capabilities will limit me. Since it takes significant time simply to animate characters in any language, I'd like a quick answer.
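    To make the streaming idea concrete, a minimal JavaScript sketch of on-demand spritesheet loading with explicit unloading (the names and URL scheme are invented):

      const sheets = new Map(); // url -> decoded Image

      function getSheet(url) {
        if (sheets.has(url)) return Promise.resolve(sheets.get(url));
        return new Promise((resolve, reject) => {
          const img = new Image();
          img.onload = () => { sheets.set(url, img); resolve(img); };
          img.onerror = reject;
          img.src = url; // e.g. "dragon_walk_north.png", fetched only when first needed
        });
      }

      function unloadSheet(url) {
        sheets.delete(url); // lets the browser reclaim the decoded bitmap
      }

    Each 2048x2048 sheet decodes to roughly 16MB of RGBA pixels (2048 × 2048 × 4 bytes), which is why unloading sheets for off-screen directions matters as much as loading them lazily.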


  • ASP.NET MVC Cookbook - public review

    - by asiemer
    I have recently started writing another book. The topic of this book is ASP.NET MVC. This book differs from my previous book in that, rather than working towards building one project from end to end, this book demonstrates specific topics from end to end. It is a recipe book (hence the cookbook name) and will be part of the Packt Publishing cookbook series. An example recipe in this book might be how to consume JSON, creating a master/details page, jQuery modal popups, custom ActionResults, etc. Basically anything recipe-oriented around the topic of ASP.NET MVC might be acceptable. If you are interested in helping out with the review process you can join the "ASP.NET MVC 2 Cookbook-review" group on Google here: http://groups.google.com/group/aspnet-mvc-2-cookbook-review Currently the suggested TOC for the project is listed, and chapters 1, 2, and most of 8 are posted. Chapter 5 should be available tonight or tomorrow. In addition to reporting any errors that you might find (much appreciated), I am very interested in hearing about recipes that you want included, expanded, or removed (as being redundant or overly simple). Any input is appreciated! Hearing user feedback after the book is complete is a little late, in my opinion (unless it is positive feedback, of course). Thank you!


  • What's the best way to use requestAnimationFrame and fixed frame rates

    - by m90
    I recently got into using the HTML5 requestAnimationFrame API a lot on animation-heavy websites, especially after seeing the Jank Busters talk. This seems to work pretty well and really improves performance in many cases. Yet one question still persists for me: when you want to use an animation that is NOT entirely calculated (think spritesheets, for example), you have to aim for a fixed frame rate. Of course one could go back to using setInterval again, but maybe there are other ways to tackle this. The two ways I could think of to use requestAnimationFrame with a fixed frame rate are:
      var fps = 25; // frames per second

      function animate() {
        // actual drawing goes here
        setTimeout(function () {
          requestAnimationFrame(animate);
        }, 1000 / fps);
      }
      animate();
    or
      var fps = 25; // frames per second
      var lastExecution = new Date().getTime();

      function animate() {
        var now = new Date().getTime();
        if ((now - lastExecution) > (1000 / fps)) {
          // do actual drawing
          lastExecution = new Date().getTime();
        }
        requestAnimationFrame(animate);
      }
      animate();
    Personally, I'd opt for the second option (the first one feels like cheating), yet it seems to be more buggy in certain situations. Is this approach really worth it (especially at low frame rates like 12.5)? Are there things to be improved? Is there another way to tackle this?
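    One other variant worth sketching (my own assumption, not from the original question): keep requestAnimationFrame driving the loop, but accumulate elapsed time and draw in fixed steps, so slow frames catch up instead of drifting (draw is a hypothetical rendering function):

      var fps = 25;
      var step = 1000 / fps;
      var last = null;
      var acc = 0;

      function animate(now) { // rAF passes a high-resolution timestamp to its callback
        if (last === null) last = now;
        acc += now - last;
        last = now;
        while (acc >= step) { // draw once per elapsed fixed step
          draw();
          acc -= step;
        }
        requestAnimationFrame(animate);
      }
      requestAnimationFrame(animate);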

