Search Results

Search found 8611 results on 345 pages for 'apt fast'.


  • How do I do JavaScript Array Animation

    - by Henry
I'm making a game but don't know how to do array animation with the PNG array and game surface that I made below. I'm trying to make it so that when the Right arrow key is pressed, the character animates as if it is walking to the right, and when the Left arrow key is pressed it animates as if it is walking to the left (kind of like Mario). I put everything on a surface instead of the canvas. Everything is explained in the code below. I couldn't find help on this anywhere. I hope what I have below makes sense. I'm basically a beginner with JavaScript. I'll be back if more is needed:

<!doctype html5>
<html>
<head></head>
<script src="graphics.js"></script>
<script src="object.js"></script>
<body onkeydown="keyDown(event)" onkeyup="keyUp(event)"></body>
<script>
//"Surface" is where I want to display my animation. It's like the HTML
//canvas but it's not that. It's just the surface to where everything in the
//game and the game itself will be displayed.
var Surface = new Graphics(600, 400, "skyblue");

//here's the array that I want to use for animation
var player = new Array("StandsRight.png", "WalksRight.png",
                       "StandsLeft.png", "WalksLeft.png");

//Here is the X coordinate, Y coordinate, the beginning png for the animation,
//and the object's name "player." I also turned the array into an object (but
//I don't know if I was supposed to do that or not).
var player = new Object(50, 100, 40, 115, "StandsRight.png", "player");

//When doing animation I know that it requires a "loop", but I don't
//know how to connect it so that it works with the arrays so that
//it could animate.
var loop = 0;

//this actually puts "player" on screen. It makes player visible and
//it is where I would like the animation to occur.
Surface.drawObject(player);

//this would be the key that makes "player" animate in the rightward direction
function keyDown(e) {
  if (e.keyCode == 39);
}

//this would be the key that makes "player" animate in the leftward direction
function keyUp(e) {
  if (e.keyCode == 39);
}

//this is the Mainloop where the game will function
MainLoop();

//the mainloop functionized
function MainLoop() {
  //this is how fast or slow I could want the entire game to go
  setTimeout(MainLoop, 10);
}
</script>
</html>

From here are the "graphics.js" and "object.js" files. This graphics.js part below is linked to the script src="graphics.js" tag in the HTML that I wrote above.
Basically, below is a separate file that I used for graphics. To run the code above, make this graphics.js code that I post below a separate file called graphics.js:

function Graphics(w, h, c) {
  document.body.innerHTML += "<table style='position:absolute;font-size:0;top:0;left:0;border-spacing:0;border-width:0;width:" + w + ";height:" + h + ";background-color:" + c + ";' border=1><tr><td></table>\n";

  this.drawRectangle = function(x, y, w, h, c, n) {
    document.body.innerHTML += "<div style='position:absolute;font-size:0;left:" + x + ";top:" + y + ";width:" + w + ";height:" + h + ";background-color:" + c + ";' id='" + n + "'></div>\n";
  }

  this.drawTexture = function(x, y, w, h, t, n) {
    document.body.innerHTML += "<img style='position:absolute;font-size:0;left:" + x + ";top:" + y + ";width:" + w + ";height:" + h + ";' id='" + n + "' src='" + t + "'></img>\n";
  }

  this.drawObject = function(o) {
    document.body.innerHTML += "<img style='position:absolute;font-size:0;left:" + o.X + ";top:" + o.Y + ";width:" + o.Width + ";height:" + o.Height + ";' id='" + o.Name + "' src='" + o.Sprite + "'></img>\n";
  }

  this.moveGraphic = function(x, y, n) {
    document.getElementById(n).style.left = x;
    document.getElementById(n).style.top = y;
  }

  this.removeGraphic = function(n) {
    document.getElementById(n).parentNode.removeChild(document.getElementById(n));
  }
}

Finally, here is the object.js file linked to the script src="object.js" tag in the HTML game file, above the graphics.js part I just wrote. This is a separate file too, so in order to run or test the HTML game code in the very first section, a person also has to make the code below a separate file called object.js. I hope this helps:

function Object(x, y, w, h, t, n) {
  this.X = x;
  this.Y = y;
  this.Velocity_X = 0;
  this.Velocity_Y = 0;
  this.Previous_X = 0;
  this.Previous_Y = 0;
  this.Width = w;
  this.Height = h;
  this.Sprite = t;
  this.Name = n;
  this.Exists = true;
}

In all, this game is made based on a tutorial on YouTube at http://www.youtube.com/watch?v=t2kUzgFM4lY&feature=relmfu. I'm just trying to learn how to add animations with it now. I hope the above helps. If not, let me know. Thanks

    Read the article

  • Silverlight Cream Monday WP7 App Review # 1

    - by Dave Campbell
I'm going to try something here... if it seems useful, I'll continue, if it doesn't, I'll stop... so give me feedback! There are *lots* of Apps in the WP7 Marketplace, and heaven help me, but the Marketplace sucks for finding stuff. I won't rehash what's already been said in the blogs, but I agree with one and all. I went out last Saturday to find 2 apps that I knew were released, and couldn't do so on my device. Even in the Zune app, it took quite a while to find them... ok, I'll back off a bit, because I just found out I can do 'Search' now if I know the name... I didn't think that was working before. So my thought is on Monday (like today), I will post a review of 5 apps/games I either use or have played with on my device. These are strictly my opinions, you understand, but hey... it's better than a poke in the eye with an iPhone!

A few disclaimers:

1. Feel free to write me about your app and tell me about it.
2. While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device.
3. Since I plan on only doing this one day a week, and only 5, I may never get caught up, so if you send me some info, be patient.
4. Re: games ... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke-Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :)
5. I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a Free Trial.

In this Issue:

Jingo! is the first app I bought, just to see what the experience was like. It's very much like a game we used to play in school in the Army in 1971 on paper we passed around. Sort of a cross between hangman and Mastermind, you try to figure out the hidden word in 5 tries. You get really good at 5-letter words after a while. I like this because you have to think, and you're not pressured by a clock.
Jingo! is by James Furdell and is $1.99

I reviewed René Schulte's Pictures Lab a while back, and have not changed my mind. This is an excellent app for playing with any photo on your device... one you've just taken, one you've synced from your PC, or one you've saved from email. I like this because you can get some cool effects for your photos, and it just works.
Pictures Lab is by Schulte Software Development and is $1.99

Since I work as a consultant, and from home, I wanted something I could track my time with. I've test-driven all the contenders I could find so far on the phone, and so far I like ONTRACK! the best. If asked, I have some suggestions, but it's probably just the way I work or think. What I do like is I can tap a project to start/stop/restart a counter, and at the end of the day it shows me how much time I've been working. If there's a way to make an adjustment in case you forget to tap the counter, I don't know how to do it, and that's my biggest complaint. I like this because you can get a daily readout which you can also email as a spreadsheet. The daily results display is very good.
ONTRACK! is by Qmino and is $2.99

Remember Item 4 above... I've been playing guitar for 48 years... obviously since before the invention of 'tuners', so I'm not as dependent upon these as some folks are. I've tried some in the past and have always felt I can do just as good by ear (I have perfect relative pitch). So, I gave this app by András Velvárt a dance just to see how it works, and it is surprisingly good.
If you're used to one of the stage tuners this may take a little getting used to, but it does the job. The difference with this one is there is no real 'null' point inside which you can think your guitar is in tune. The soundwave stays visible on the device, and if it's moving to the right, your string is flat, if it's moving to the left, your string is sharp. Getting it exact might be tricky, but it is exact! If you need to rely on a tuner, this is a good choice in my opinion, exactly because of the sensitivity.. tune up with this and you're dead-on. Guitar Tuner is by Kinabalu Innovation Limited and is $0.99 Popper 2 is the WP7 version of a wildly popular game by Bill Reiss named Dr. Popper. You can get a trial, or you can now get a free lite version of the game. Popper 2 is a fast-paced bubble breaker game. I find it something fun to play when I just want to buzz out, but maybe the best review is that my daughter didn't want to give my phone back when I showed it to her, and always wants to grab my phone to play 'that game'. A fun distraction with great graphics and a great price Popper 2 is by Blue Rose Systems, LLC and is $1.29 Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light!   Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
LAN parties offer the enjoyment of head-to-head gaming in a real-life social environment. In general, they are experiencing decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work. Varda has done his own write-ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer. The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well-specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive. Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site: "Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play." Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize, as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage. Linux still runs the server and all of the tools used are open source software. The hardware here is an Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients'. In addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SSD. When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has set up Linux network booting. DHCP pointed the clients to the server, which then supplied PXELINUX using TFTP. When booted, file access was accomplished through network block device (NBD). This is a very easy-to-use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user mode device driver and the device can be mounted within the file system using the mount command. One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system, as the server only sees a single file. The advantage is performance, as the client operating system simply sees a block device, and besides, these security issues aren't relevant in this setup. Unfortunately, Windows 7 can't use NBD, so Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately, gPXE came to the rescue, and he bootstraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot.
It can also optionally boot from an HTTP server rather than the more traditional TFTP server. According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios. At the time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy-on-write feature of LVM so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc.

Summary: Overall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences to most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.

    Read the article

  • Can Kind People Finish First?

    - by Oracle Accelerate for Midsize Companies
by Jim Lein, Oracle Midsize Programs

In an earlier post, I expressed my undying love for KIND Snacks' products. This month's Oracle Profit magazine features an interview with KIND Healthy Snacks Founder and CEO Daniel Lubetzky entitled "Better Business". Lubetzky expresses his vision for making KIND a "not for profit only" company. All great companies start with a good idea. In this case, that one great idea was to offer a healthy snack with ingredients you can "see and pronounce". That's one of the things I really like about this company, coupled with the fact that their snacks taste great. They compete in an overcrowded playing field, but I've found that it's rare to find an energy snack that both tastes good and is good for you. A couple of interesting facts I learned from reading this article:

9 out of 10 consumers who try a KIND bar will purchase a KIND product again and recommend it to others. KIND has the highest Net Promoter Score among the top 10 brands in the nutritional bar category (I confess I've never heard about this rating before, but now that I have, it's pretty cool). KIND's corporate mantra, "Do the Kind Thing", both encourages people to do random acts of kindness and provides easy mechanisms for doing so.

Not coincidentally, I think, KIND is indeed a story about how nice guys can finish first. KIND has doubled in size every year for the last ten years and now employs over 300 people, with sales exceeding $120M annually.

Growth Applies Pressures

One thing I know for certain from interacting with our fast-growing customers over the last fifteen years is that growth applies myriad pressures across the organization: resources, processes, technology systems, and leadership agility. And it's easy to forget that Oracle was once an entrepreneurial startup and experienced all those same pressures that other growing companies are experiencing today. When asked by Profit Editor in Chief Aaron Lazenby, "What sort of pressure does KIND's growth and success place on operations?", Lubetzky responded, "We have a demand planning process right now that is manual to a significant extent, and it just takes so much management time. It takes us days and sometimes weeks to produce information that is critical to our business—and by the time we get the results, we need revised data. Our sales leadership could go out selling, but instead they’re talking to our team about forecasts."

Hitching Your Wagon to Oracle

Lubetzky and his team selected Oracle for what I believe is our company's greatest strength: hitch your wagon to Oracle and you can trust that we will be there for the long run with the solutions you need and financial staying power.
In Lubetzky's words, "The KIND philosophy requires you to have a long-term view of things; taking shortcuts may be the fastest way to get things done, but in the long term that can come back and bite you. Oracle is the type of company—and has the kind of platform—that is here for the long term. It’s not going to go away tomorrow. And Oracle is going to invest all the necessary resources into staying ahead of the game and improving." So next time you're in the supermarket or an REI (my favorite store in the world) or any of the other 80,000 locations that carry KIND, give one a try. Maybe some day you'll want to become a KIND Brand Ambassador.

Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter. Sign up to receive the latest communications from Oracle’s industry leaders and experts.

Jim Lein: I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado and love relating stories about creativity and innovation, whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Thought Oracle Usability Advisory Board Was Stuffy? Wrong. Justification for Attending OUAB: ROI

    - by ultan o'broin
Looking for reasons to tell your boss why your organization needs to join the Oracle Usability Advisory Board, or why you need approval to attend one of its meetings (see the requirements)? Try phrases such as "Continued Return on Investment (ROI)", "Increased Productivity" or "Happy Workers". With OUAB your participation is about realizing and sustaining ROI across the entire applications life-cycle: from input to designs to implementation choices and integration, usage and performance, and on to measuring and improving the onboarding and support experience. If you think this is a boring meeting of middle-aged people sitting around moaning about customizing desktop forms and why the BlackBerry is here to stay, think again! How about this for a rich agenda, all designed to engage the audience in a thought-provoking and feedback-eliciting day of swirling interactions, contextual usage, global delivery, mobility, consumerization, gamification and tailoring your implementation to reflect real users doing real work in real environments.

Foldable, rollable ereader devices provide a newspaper-like UX for electronic news. Or a way to wrap silicon chips, perhaps. Explored at the OUAB Europe Meeting (photograph from the Terrace Restaurant in TVP. Nom.)

At the 7 December 2012 OUAB Europe meeting in Oracle Thames Valley Park, UK, Oracle partners and customers stepped up to the mic and PPT decks with a range of facts and examples to astound any UX conference C-level sceptic. Over the course of the day we covered much ground, but it was all related in a contextual, flexible, simplification and engagement way about delivering results for business: that means solving problems. This means being about the user and their tasks, and how design and technology transform work into a productive activity that users and bean counters will be excited by. The sessions really gelled for me:

1. Mobile design patterns and the powerful propositions for customers and partners offered by using the design guidance with Oracle ADF Mobile. Customers' and partners' existing ADF developers are now productive, efficient ADF Mobile developers, applying proven UX guidance using ADF Mobile components and other Oracle Fusion Middleware in the development toolkit. You can find the Mobile UX Design Patterns and Guidance on Building Mobile Apps on OTN.

2. Oracle Voice and Apps. How this medium offers so much potential in the enterprise and offers a window into Fusion Apps cloud webservices, Oracle RightNow NLP and Nuance technology. Exciting stuff, demoed live on a mobile phone. Stay tuned for more features and modalities and how you can tailor your own apps experience.

3. Oracle RightNow Natural Language Processing (NLP) Virtual Assistant technology (Ella): how contextual intervention and learning from user sessions delivers a great personalized UX for users interacting with Ella, a fifth-generation VA to solve problems and seek knowledge.

4. BYOD Keynote: A balanced keynote address contrasting Fujitsu's explanation of the concept, challenges, and trends, and the expectation that BYOD must be embraced in a flexible way, with the resolute, carefully crafted high-security enterprise requirements that nuance the BYOD concept and proposals with the realities of their world of watertight information and device-sharing policies. Fascinating stuff, as well as providing anecdotes to make us think about our own BYOD deployments. One size does not fit all.
5. Icon Cultural Surveys Results and Insights Arising: Ever wondered about the cultural appropriateness of icons used in software UIs and how these icons are assessed for global use? Or considered that social media "Like" icons might be unacceptable hand gestures in some cultures or enterprises? Or whether old-world icons like the Save floppy disk are still found acceptable? Well, the survey results told you. Challenges must be tested, over time, and context of use is critical now, including external factors such as internet and social media adoption. Indeed, the fears about global rejection of the face and hand icons were not borne out, and some of the more anachronistic icons (checkbooks, microphones, reel-to-reel tape decks, 3.5" floppies for "save") have become accepted metaphors for current actions. More importantly, the findings brought into focus the reason for OUAB: engage with and elicit feedback through working groups before we build anything.

6. EReaders and Oracle iBook: What are the uptake and trends of ereaders? And how about a demo of an iBook with enterprise apps content? Well received by the audience, the session included a live running poll of ereader usage.

7. Gamification Design Jam: A fun, hands-on event for teams of Oracle staff, partners and customers, actually building gamified flows, a practice that can be applied right away by customers and partners.

8. UX Direct: A new offering of usability best practices, coming to an external website for you in 2013. Find a real user, observe their tasks, design and approve, build and measure. Simple stuff to improve apps implementations no end.

9. FUSE (an internal term only, basically Fusion Simplified Experience): a demo of the new Face of Fusion Applications: inherently mobile, simple to use, social, personalizable and FAST. Three great demos from the HCM, CRM and ICT world on how these UX designs can be used in different ways.

So, a powerful breadth and depth of UX solutions and opportunities for customers and partners to engage with and explore how they can make their users happy and benefit their business, reaping continued ROI from those apps investments. Find out more about the OUAB and how to get involved here ...

    Read the article

  • Chrome and Firefox not able to find some sites after Ubuntu's recent update?

    - by gkr
Loading of revision3.com in Chrome stops with the status saying "waiting for static.inplay.tubemogul.com". grooveshark.com in Chrome never loads. But wikipedia.org and google.com work just fine. Same behavior in Firefox too. I use a wired DSL connection in Ubuntu 12.04. I guess this started happening after I upgraded the Chrome browser or Flash plugin a few days ago from Ubuntu updates through update-manager. Thanks. EDIT: Everything works normally in my WinXP. The problem is only in Ubuntu 12.04. This is what I get when I use wget:

gkr@gkr-desktop:~$ wget revision3.com
--2012-06-12 08:58:01-- http://revision3.com/
Resolving revision3.com (revision3.com)... 173.192.117.198
Connecting to revision3.com (revision3.com)|173.192.117.198|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `index.html'
[ ] 81,046 32.0K/s in 2.5s
2012-06-12 08:58:04 (32.0 KB/s) - `index.html' saved [81046]

gkr@gkr-desktop:~$ wget static.inplay.tubemogul.com
--2012-06-12 08:51:25-- http://static.inplay.tubemogul.com/
Resolving static.inplay.tubemogul.com (static.inplay.tubemogul.com)... 72.21.81.253
Connecting to static.inplay.tubemogul.com (static.inplay.tubemogul.com)|72.21.81.253|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2012-06-12 08:51:25 ERROR 404: Not Found.

grooveshark.com never responds, so I have to Ctrl+C to terminate wget.

gkr@gkr-desktop:~$ wget grooveshark.com
--2012-06-12 08:51:33-- http://grooveshark.com/
Resolving grooveshark.com (grooveshark.com)... 8.20.213.76
Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected.
HTTP request sent, awaiting response... ^C
gkr@gkr-desktop:~$

This is the apt term.log of the updates I mentioned: Log started: 2012-06-09 04:51:45 (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 161392 files and directories currently installed.) Preparing to replace google-chrome-stable 19.0.1084.52-r138391 (using .../google-chrome-stable_19.0.1084.56-r140965_i386.deb) ... Unpacking replacement google-chrome-stable ... Preparing to replace libpulse0 1:1.1-0ubuntu15 (using .../libpulse0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse0 ... Preparing to replace libpulse-mainloop-glib0 1:1.1-0ubuntu15 (using .../libpulse-mainloop-glib0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse-mainloop-glib0 ... Preparing to replace libpulsedsp 1:1.1-0ubuntu15 (using .../libpulsedsp_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulsedsp ... Preparing to replace sudo 1.8.3p1-1ubuntu3.2 (using .../sudo_1.8.3p1-1ubuntu3.3_i386.deb) ... Unpacking replacement sudo ... Preparing to replace flashplugin-installer 11.2.202.235ubuntu0.12.04.1 (using .../flashplugin-installer_11.2.202.236ubuntu0.12.04.1_i386.deb) ... Unpacking replacement flashplugin-installer ... Preparing to replace pulseaudio-utils 1:1.1-0ubuntu15 (using .../pulseaudio-utils_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-utils ...
Preparing to replace pulseaudio 1:1.1-0ubuntu15 (using .../pulseaudio_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio ... Preparing to replace pulseaudio-module-gconf 1:1.1-0ubuntu15 (using .../pulseaudio-module-gconf_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-gconf ... Preparing to replace pulseaudio-module-x11 1:1.1-0ubuntu15 (using .../pulseaudio-module-x11_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-x11 ... Preparing to replace shared-mime-info 1.0-0ubuntu4 (using .../shared-mime-info_1.0-0ubuntu4.1_i386.deb) ... Unpacking replacement shared-mime-info ... Preparing to replace pulseaudio-module-bluetooth 1:1.1-0ubuntu15 (using .../pulseaudio-module-bluetooth_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-bluetooth ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for update-notifier-common ... flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz Installing from local file /tmp/tmpNrBt4g.gz Flash Plugin installed. Processing triggers for doc-base ... Processing 1 changed doc-base file... Registering documents with scrollkeeper... Setting up google-chrome-stable (19.0.1084.56-r140965) ... Setting up libpulse0 (1:1.1-0ubuntu15.1) ... Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.1) ... Setting up libpulsedsp (1:1.1-0ubuntu15.1) ... Setting up sudo (1.8.3p1-1ubuntu3.3) ... Installing new version of config file /etc/pam.d/sudo ... Setting up flashplugin-installer (11.2.202.236ubuntu0.12.04.1) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up pulseaudio-utils (1:1.1-0ubuntu15.1) ... Setting up pulseaudio (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-gconf (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-x11 (1:1.1-0ubuntu15.1) ... Setting up shared-mime-info (1.0-0ubuntu4.1) ... Setting up pulseaudio-module-bluetooth (1:1.1-0ubuntu15.1) ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Log ended: 2012-06-09 04:53:32 This is from history.log Start-Date: 2012-06-09 04:51:45 Commandline: aptdaemon role='role-commit-packages' sender=':1.56' Upgrade: shared-mime-info:i386 (1.0-0ubuntu4, 1.0-0ubuntu4.1), pulseaudio:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse-mainloop-glib0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), sudo:i386 (1.8.3p1-1ubuntu3.2, 1.8.3p1-1ubuntu3.3), pulseaudio-module-bluetooth:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-x11:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), flashplugin-installer:i386 (11.2.202.235ubuntu0.12.04.1, 11.2.202.236ubuntu0.12.04.1), pulseaudio-utils:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-gconf:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulsedsp:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), google-chrome-stable:i386 (19.0.1084.52-r138391, 19.0.1084.56-r140965) End-Date: 2012-06-09 04:53:32
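
For anyone else hitting this, a quick way to tell whether a stalled site is failing at the DNS step or hanging after the TCP connection opens (which is what the wget output above suggests for grooveshark.com) is a small test script. This is an illustrative sketch only, not part of the original question; the hostnames simply mirror the ones tested with wget:

import socket

# Hostnames mirror the wget tests above.
HOSTS = ["revision3.com", "static.inplay.tubemogul.com",
         "grooveshark.com", "wikipedia.org"]

for host in HOSTS:
    # Step 1: does the name resolve at all?
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror as err:
        print("{0}: DNS lookup failed ({1})".format(host, err))
        continue
    # Step 2: does a TCP connection open, and does the server answer HTTP?
    try:
        conn = socket.create_connection((addr, 80), timeout=5)
        conn.settimeout(5)
        conn.sendall("HEAD / HTTP/1.1\r\nHost: {0}\r\nConnection: close\r\n\r\n"
                     .format(host).encode("ascii"))
        reply = conn.recv(128)
        conn.close()
        first_line = reply.split(b"\r\n")[0].decode("ascii", "replace")
        print("{0} ({1}): {2}".format(host, addr, first_line or "empty reply"))
    except socket.timeout:
        print("{0} ({1}): connected, but no HTTP reply within 5s".format(host, addr))
    except OSError as err:
        print("{0} ({1}): connection failed ({2})".format(host, addr, err))

If names resolve and connections open but replies never arrive for only some sites, the problem usually sits below the browser (for example MTU on the DSL link, or something on the path filtering traffic) rather than in the Chrome or Flash packages that were updated.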

    Read the article

  • 5 Android Keyboard Replacements to Help You Type Faster

    - by Chris Hoffman
    Android allows developers to replace its keyboard with their own keyboard apps. This has led to experimentation and great new features, like the gesture-typing feature that’s made its way into Android’s official keyboard after proving itself in third-party keyboards. This sort of customization isn’t possible on Apple’s iOS or even Microsoft’s modern Windows environments. Installing a third-party keyboard is easy — install it from Google Play, launch it like another app, and it will explain how to enable it. Google Keyboard Google Keyboard is Android’s official keyboard, as seen on Google’s Nexus devices. However, there’s a good chance your Android smartphone or tablet comes with a keyboard designed by its manufacturer instead. You can install the Google Keyboard from Google Play, even if your device doesn’t come with it. This keyboard offers a wide variety of features, including a built-in gesture-typing feature, as popularized by Swype. It also offers prediction, including full next-word prediction based on your previous word, and includes voice recognition that works offline on modern versions of Android. Google’s keyboard may not offer the most accurate swiping feature or the best autocorrection, but it’s a great keyboard that feels like it belongs in Android. SwiftKey SwiftKey costs $4, although you can try it free for one month. In spite of its price, many people who rarely buy apps have been sold on SwiftKey. It offers amazing auto-correction and word-prediction features. Just mash away on your touch-screen keyboard, typing as fast as possible, and SwiftKey will notice your mistakes and type what you actually meant to type. SwiftKey also now has built-in support for gesture-typing via SwiftKey Flow, so you get a lot of flexibility. At $4, SwiftKey may seem a bit pricey, but give the month-long trial a try. A great keyboard makes all the typing you do everywhere on your phone better. SwiftKey is an amazing keyboard if you tap-to-type rather than swipe-to-type. Swype While other keyboards have copied Swype’s swipe-to-type feature, none have completely matched its accuracy. Swype has been designing a gesture-typing keyboard for longer than anyone else and its gesture feature still seems more accurate than its competitors’ gesture support. If you use gesture-typing all the time, you’ll probably want to use Swype. Swype can now be installed directly from Google Play without the old, tedious process of registering a beta account and sideloading the Swype app. Swype offers a month-long free trial and the full version is available for $1 afterwards. Minuum Minuum is a crowdfunded keyboard that is currently still in beta and only supports English. We include it here because it’s so interesting — it’s a great example of the kind of creativity and experimentation that happens when you allow developers to experiment with their own forms of keyboard. Minuum uses a tiny, minimum keyboard that frees up your screen space, so your touch-screen keyboard doesn’t hog your device’s screen. Rather than displaying a full keyboard on your screen, Minuum displays a single row of letters.  Each letter is small and may be difficult to hit, but that doesn’t matter — Minuum’s smart autocorrection algorithms interpret what you intended to type rather than typing the exact letters you press. Just swipe to the right to type a space and accept Minuum’s suggestion. At $4 for a beta version with no trial, Minuum may seem a bit pricy. But it’s a great example of the flexibility Android allows. 
If there’s a problem with this keyboard, it’s that it’s a bit late — in an age of 5″ smartphones with 1080p screens, full-size keyboards no longer feel as cramped. MessagEase MessagEase is another example of a new take on text input. Thankfully, this keyboard is available for free. MessagEase presents all letters in a nine-button grid. To type a common letter, you’d tap the button. To type an uncommon letter, you’d tap the button, hold down, and swipe in the appropriate direction. This gives you large buttons that can work well as touch targets, especially when typing with one hand. Like any other unique twist on a traditional keyboard, you’d have to give it a few minutes to get used to where the letters are and the new way it works. After giving it some practice, you may find this is a faster way to type on a touch-screen — especially with one hand, as the targets are so large. Google Play is full of replacement keyboards for Android phones and tablets. Keyboards are just another type of app that you can swap in. Leave a comment if you’ve found another great keyboard that you prefer using. Image Credit: Cheon Fong Liew on Flickr     

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large scale game project and I've run into a situation where I'm not quite sure what the best solution may be (if there is a lone solution). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad but I'm not quite sure what to do and I thought it would help just to discuss the problem with people more educated in the field. Before we get started, here are some of the questions I've looked at for help in the past: Best way to keep track of game objects Elegant way to simulate large amounts of entities within a game world What is the most efficient container to store dynamic game objects in? I've also read articles about different data structures commonly used in games to store game objects such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all I'm using Python 3 to design the game. It has to be Python 3, if I could I would use C++ or Unityscript or something else, but I'm restricted to having to use Python 3. My game will be a form of side scroller shooter game. In said game the player will traverse large rooms with large amounts of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects that it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects however are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or when it is in a certain range of the screen). Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem. I need a way of containing the game objects such that it minimizes the number of cases when testing for collisions against one another. The final problem is that of creating and destroying game objects. I think this problem is pretty self explanatory. To store the game objects then I've considered a number of different methods. The original method I had was to simply store all the objects in a hash table by an id. This method was simple, and decently fast as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem; I wasn't originally planning on using computer generated ids to store the game objects I was going to rely on them all using ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer generated ids, if I used a decent hashing algorithm then the chances of collisions would be around 1 in 4 billion. The problem with using a hash table however is that it is inefficient in checking to see what objects are in range of the viewport. 
Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to only update objects that are in the viewport would be to iterate through every object in the hash table and check if it is in the viewport or not, updating only the ones that are in the valid area. This would be incredibly slow in scenarios where the amount of game objects exceeds 500, or even 200. The second solution was to store everything in a 2-d list. The world is partitioned up into cells (a tilemap essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects that are currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells containing entities that are in valid range to be updated. This method also solves the problem of collision detection; when I take an entity I can find the cell that it is currently in, then check only against entities in it's cell and the 8 cells around it. One problem with this system however is that it prohibits easy lookup of game objects. One solution I had would be to simultaneously keep a hash table that would contain all the positions of the objects in the 2-d list indexed by the id of said object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed. Both systems have ups and downs and seem to solve some of each other's problems, however using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post, any and all criticism and help is welcome. Thanks in advance! EDIT: Please don't close the question because it has a bad title, I'm just bad with names!
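
For what it's worth, here is a minimal Python 3 sketch of the grid idea described above: keep the id-to-object dict for O(1) lookup, and rebuild the cell buckets once per frame from the objects' positions, so the per-frame work for viewport updates and collision candidates stays proportional to what is actually near the screen. The class, attribute and method names here (World, obj.id, obj.x, obj.y and so on) are illustrative assumptions, not taken from the question:

from collections import defaultdict

CELL = 64  # cell size in pixels; roughly the size of a typical object

def cell_of(x, y):
    # Map a world position to a grid cell coordinate.
    return int(x) // CELL, int(y) // CELL

class World:
    def __init__(self):
        self.objects = {}              # id -> object, O(1) lookup by id
        self.grid = defaultdict(list)  # (cx, cy) -> objects, rebuilt each frame

    def add(self, obj):
        self.objects[obj.id] = obj

    def remove(self, obj_id):
        self.objects.pop(obj_id, None)

    def rebuild_grid(self):
        # Cheap for a few hundred objects; call once per frame.
        self.grid.clear()
        for obj in self.objects.values():
            self.grid[cell_of(obj.x, obj.y)].append(obj)

    def in_view(self, left, top, right, bottom, margin=1):
        # Objects whose cell overlaps the viewport, plus `margin` extra cells
        # so enemies can start reacting slightly before they appear on screen.
        cx0, cy0 = cell_of(left, top)
        cx1, cy1 = cell_of(right, bottom)
        for cx in range(cx0 - margin, cx1 + margin + 1):
            for cy in range(cy0 - margin, cy1 + margin + 1):
                for obj in self.grid.get((cx, cy), ()):
                    yield obj

    def collision_candidates(self, obj):
        # Same cell and the 8 surrounding cells.
        cx, cy = cell_of(obj.x, obj.y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for other in self.grid.get((cx + dx, cy + dy), ()):
                    if other is not obj:
                        yield other

Per frame you would call rebuild_grid() once, update the always-active objects straight from objects.values(), and run the screen-bound updates only over in_view(...). Appending a few hundred references into buckets each frame is much cheaper than it sounds, so the "rebuilt every single game frame" concern above is usually not the bottleneck.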

    Read the article

  • Which of these algorithms is best for my goal?

    - by JonathonG
    I have created a program that restricts the mouse to a certain region based on a black/white bitmap. The program is 100% functional as-is, but uses an inaccurate, albeit fast, algorithm for repositioning the mouse when it strays outside the area. Currently, when the mouse moves outside the area, basically what happens is this: A line is drawn between a pre-defined static point inside the region and the mouse's new position. The point where that line intersects the edge of the allowed area is found. The mouse is moved to that point. This works, but only works perfectly for a perfect circle with the pre-defined point set in the exact center. Unfortunately, this will never be the case. The application will be used with a variety of rectangles and irregular, amorphous shapes. On such shapes, the point where the line drawn intersects the edge will usually not be the closest point on the shape to the mouse. I need to create a new algorithm that finds the closest point to the mouse's new position on the edge of the allowed area. I have several ideas about this, but I am not sure of their validity, in that they may have far too much overhead. While I am not asking for code, it might help to know that I am using Objective C / Cocoa, developing for OS X, as I feel the language being used might affect the efficiency of potential methods. My ideas are: Using a bit of trigonometry to project lines would work, but that would require some kind of intense algorithm to test every point on every line until it found the edge of the region... That seems too resource intensive since there could be something like 200 lines that would have each have to have as many as 200 pixels checked for black/white.... Using something like an A* pathing algorithm to find the shortest path to a black pixel; however, A* seems resource intensive, even though I could probably restrict it to only checking roughly in one direction. It also seems like it will take more time and effort than I have available to spend on this small portion of the much larger project I am working on, correct me if I am wrong and it would not be a significant amount of code (100 lines or around there). Mapping the border of the region before the application begins running the event tap loop. I think I could accomplish this by using my current line-based algorithm to find an edge point and then initiating an algorithm that checks all 8 pixels around that pixel, finds the next border pixel in one direction, and continues to do this until it comes back to the starting pixel. I could then store that data in an array to be used for the entire duration of the program, and have the mouse re-positioning method check the array for the closest pixel on the border to the mouse target position. That last method would presumably execute it's initial border mapping fairly quickly. (It would only have to map between 2,000 and 8,000 pixels, which means 8,000 to 64,000 checked, and I could even permanently store the data to make launching faster.) However, I am uncertain as to how much overhead it would take to scan through that array for the shortest distance for every single mouse move event... I suppose there could be a shortcut to restrict the number of elements in the array that will be checked to a variable number starting with the intersecting point on the line (from my original algorithm), and raise/lower that number to experiment with the overhead/accuracy tradeoff. 
Please let me know if I am over thinking this and there is an easier way that will work just fine, or which of these methods would be able to execute something like 30 times per second to keep mouse movement smooth, or if you have a better/faster method. I've posted relevant parts of my code below for reference, and included an example of what the area might look like. (I check for color value against a loaded bitmap that is black/white.)

//
// This part of my code runs every single time the mouse moves.
//
CGPoint point = CGEventGetLocation(event);
float tX = point.x;
float tY = point.y;
if( is_in_area(tX,tY, mouse_mask)){
    // target is inside O.K. area, do nothing
}else{
    CGPoint target;
    //point inside restricted region:
    float iX = 600; // inside x
    float iY = 500; // inside y
    // delta to midpoint between iX,iY and tX,tY
    float dX;
    float dY;
    float accuracy = .5; //accuracy to loop until reached
    do {
        dX = (tX-iX)/2;
        dY = (tY-iY)/2;
        if(is_in_area((tX-dX),(tY-dY),mouse_mask)){
            iX += dX;
            iY += dY;
        } else {
            tX -= dX;
            tY -= dY;
        }
    } while (abs(dX)>accuracy || abs(dY)>accuracy);
    target = CGPointMake(roundf(tX), roundf(tY));
    CGDisplayMoveCursorToPoint(CGMainDisplayID(),target);
}

Here is "is_in_area(int x, int y)":

bool is_in_area(NSInteger x, NSInteger y, NSBitmapImageRep *mouse_mask){
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSUInteger pixel[4];
    [mouse_mask getPixel:pixel atX:x y:y];
    if(pixel[0]!= 0){
        [pool release];
        return false;
    }
    [pool release];
    return true;
}
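
On the third idea: mapping the border once and then snapping to the nearest border pixel on each event is usually plenty fast. Below is a language-agnostic sketch in Python (the question's code is Objective-C, so treat this as pseudocode); inside(x, y) stands in for the same black/white bitmap test that is_in_area performs above, and every name here is illustrative rather than taken from the question:

def trace_border(inside, width, height):
    # One-off pass at startup: keep every pixel that is inside the region
    # but touches at least one outside (or out-of-bounds) pixel.
    border = []
    for y in range(height):
        for x in range(width):
            if not inside(x, y):
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1),
                           (1, 1), (1, -1), (-1, 1), (-1, -1)):
                nx, ny = x + dx, y + dy
                if (nx < 0 or ny < 0 or nx >= width or ny >= height
                        or not inside(nx, ny)):
                    border.append((x, y))
                    break
    return border

def nearest_border_point(border, tx, ty):
    # Brute-force nearest border pixel by squared distance; a few thousand
    # points per mouse event is well within a 30-events-per-second budget.
    return min(border, key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)

If the brute-force minimum ever shows up in profiling, bucketing the border points into coarse cells (or a k-d tree) and searching outward from the target's cell cuts the per-event cost further, but it is worth measuring the simple version first.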

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
Azure is fascinating, but even more fascinating is the marriage of Azure and TFS!

Introduction

Recently a client I worked for had 2 major business-critical applications being delivered with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at such short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the production environment. This didn’t seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, which was provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the ‘test rig’ proved to be less than if it were hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post; in this series of posts, I’ll try and cover, start to end, everything you need to know to use Azure to build your Test Rig in the cloud.

But Why Azure? I have my own Data Centre…

If the environment is provisioned in your own datacentre:
- No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched.
- How fast can you scale the environments up or down (keeping the enterprise processes in mind)?

Administration, Cost, Flexibility and Scalability are the areas you would want to think around when taking the decision between your own Data Centre and Azure!

How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering?

Microsoft's offering of the Cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS).

PaaS – Platform as a Service: Fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications.

IaaS – Infrastructure as a Service: Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.
The biggest advantage of PaaS is that you do not have to worry about maintaining the environment; you can focus all your time on solving the business problems with your solution rather than worrying about the infrastructure. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post here on the difference between SaaS, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud, and why Azure in particular, let’s discuss the Test Rig.

The Load Test Rig – Topology

Now the moment of truth. Of course, a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I’ve decided to try to make a load testing rig where the Agents are running on Windows Azure. I’ll talk you through the above topology:

- User: The user kick-starts the load test run from the developer workstation on premise. This passes the request to the Test Controller.
- Test Controller: The Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms.
- Test Agents: The Test Agents are on the Windows Azure public cloud. As soon as the Test Controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, and the results are captured by the test agents. Finally, the results are passed over to the controller.
- Servers: The Web Server and DB Server are hosted on premise in the datacentre. This is usually the case with business-critical applications; you probably want to manage them yourself.

Recap and What’s next?

So, in this introduction to the series of blog posts on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the Test Rig topology that I will be using. I would also like to mention that I stumbled upon this [Video] on Azure in a nutshell; a great watch if you are new to Windows Azure. In the next post I intend to start setting up the Load Test Environment and discuss pricing with respect to the test agent machine types that will be used in the test rig. Hope you enjoyed this post. If you have any recommendations on things that I should consider, or any questions or feedback, feel free to add them to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • .NET development on a Retina MacBook Pro with Windows 8

    - by Jeff
    I remember sitting in Building 5 at Microsoft with some of my coworkers, when one of them came in with a shiny new 11” MacBook Air. It was nearly two years ago, and we found it pretty odd that the OEM’s building Windows machines sucked at industrial design in a way that defied logic. While Dell and HP were in a race to the bottom building commodity crap, Apple was staying out of the low-end market completely, and focusing on better design. In the process, they managed to build machines people actually wanted, and maintain an insanely high margin in the process. I stopped buying the commodity crap and custom builds in 2006, when Apple went Intel. As a .NET guy, I was still in it for Microsoft’s stack of development tools, which I found awesome, but had back to back crappy laptops from HP and Dell. After that original 15” MacBook Pro, I also had a Mac Pro tower (that I sold after three years for $1,500!), a 27” iMac, and my favorite, a 17” MacBook Pro (the unibody style) with an SSD added from OWC. The 17” was a little much to carry around because it was heavy, but it sure was nice getting as much as eight hours of battery life, and the screen was amazing. When the rumors started about a 15” model with a “retina” screen inspired by the Air, I made up my mind I wanted one, and ordered it the day it came out. I sold my 17”, after three years, for $750 to a friend who is really enjoying it. I got the base model with the upgrade to 16 gigs of RAM. It feels solid for being so thin, and if you’ve used the third generation iPad or the newer iPhone, you’ll be just as thrilled with the screen resolution. I’m typically getting just over six hours of battery life while running a VM, but Parallels 8 allegedly makes some power improvements, so we’ll see what happens. (It was just released today.) The nice thing about VM’s are that you can run more than one at a time. Primarily I run the Windows 8 VM with four cores (the laptop is quad-core, but has 8 logical cores due to hyperthreading or whatever Intel calls it) and 8 gigs of RAM. I also have a Windows Server 2008 R2 VM I spin up when I need to test stuff in a “real” server environment, and I give it two cores and 4 gigs of RAM. The Windows 8 VM spins up in about 8 seconds. Visual Studio 2012 takes a few more seconds, but count part of that as the “ReSharper tax” as it does its startup magic. The real beauty, the thing I looked most forward to, is that beautifully crisp C# text. Consolas has never looked as good as it does at 10pt. as it does on this display. You know how it looks great at 80pt. when conference speakers demo stuff on a projector? Think that sharpness, only tiny. It’s just gorgeous. Beyond that, everything is just so responsive and fast. Builds of large projects happen in seconds, hundreds of unit tests run in seconds… you just don’t spend a lot of time waiting for stuff. It’s kind of painful to go back to my 27” iMac (which would be better if I put an SSD in it before its third birthday). Are there negatives? A few minor issues, yes. As is the case with OS X, not everything scales right. You’ll see some weirdness at times with splash screens and icons and such. Chrome’s text rendering (in Windows) is apparently not aware of how to deal with higher DPI’s, so text is fuzzy (the OS X version is super sharp, however). You’ll also have to do some fiddling with keyboard settings to use the Windows 8 keyboard shortcuts. Overall, it’s as close to a no-compromise development experience as I’ve ever had. 
I’m not even going to bother with Boot Camp because the VM route already exceeds my expectations. You definitely get what you pay for. If this one also lasts three years and I can turn around and sell it, it’s worth it for something I use every day.

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the most safe and efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups, these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do but it takes long time and I managed to read some of the data in the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely? How should he approach the problem? The Answer SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles cds – look online for cd shredders. This is the right option if you end up doing this routinely. I don’t do this very often – For small scale destruction I favour a pair of tin snips – they have enough force to cut through a cd, yet are blunt enough to cause small cracks along the sheer line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There’s also the fun, and probably dangerous way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother’s good microwave. There’s a lot of videos of this on YouTube – such as this (who’s done this in a kitchen… and using his mom’s microwave). This results in a very much destroyed cd in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us. Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically, that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone. Have something to add to the explanation? Sound off in the the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here. 
    

    Read the article

  • Five development tools I can't live without

    - by bconlon
    When applying to join Geeks with Blogs I had to specify the development tools I use every day. That got me thinking, it's taken a long time to whittle my tools of choice down to the selection I use, so it might be worth sharing. Before I begin, I appreciate we all have our preferred development tools, but these are the ones that work for me. Microsoft Visual Studio Microsoft Visual Studio has been my development tool of choice for more years than I care to remember. I first used this when it was Visual C++ 1.5 (hats off to those who started on 1.0) and by 2.2 it had everything I needed from a C++ IDE. Versions 4 and 5 followed and if I had to guess I would expect more Windows applications are written in VC++ 6 and VB6 than any other language. Then came the not so great versions Visual Studio .Net 2002 (7.0) and 2003 (7.1). If I'm honest I was still using v6. 2005 was better and 2008 was simply brilliant. Everything worked, the compiler was super fast and I was happy again...then came 2010...oh dear. 2010 is a big step backwards for me. It's not encouraging for my upcoming WPF exploits that 2010 is fronted in WPF technology, with the forever growing Find/Replace dialog, the issues with C++ intellisense, and the buggy debugger. That said it is still my tool of choice but I hope they sort the issue in SP1. I've tried other IDEs like Visual Age and Eclipse, but for me Visual Studio is the best. A really great tool. Liquid XML Studio XML development is a tricky business. The W3C standards are often difficult to get to the bottom of so it's great to have a graphical tool to help. I first used Liquid Technologies 5 or 6 years back when I needed to process XML data in C++. Their excellent XML Data Binding tool has an easy to use Wizard UI (as compared to Castor or JAXB command line tools) and allows you to generate code from an XML Schema. So instead of having to deal with untyped nodes like with a DOM parser, instead you get an Object Model providing a custom API in C++, C#, VB etc. More recently they developed a graphical XML IDE with XML Editor, XSLT, XQuery debugger and other XML tools. So now I can develop an XML Schema graphically, click a button to generate a Sample XML document, and click another button to run the Wizard to generate code including a Sample Application that will then load my Sample XML document into the generated object model. This is a very cool toolset. Note: XML Data Binding is nothing to do with WPF Data Binding, but I hope to cover both in more detail another time. .Net Reflector Note: I've just noticed that starting form the end of February 2011 this will no longer be a free tool !! .Net Reflector turns .Net byte code back into C# source code. But how can it work this magic? Well the clue is in the name, it uses reflection to inspect a compiled .Net assembly. The assembly is compiled to byte code, it doesn't get compiled to native machine code until its needed using a just-in-time (JIT) compiler. The byte code still has all of the information needed to see classes, variables. methods and properties, so reflector gathers this information and puts it in a handy tree. I have used .Net Reflector for years in order to understand what the .Net Framework is doing as it sometimes has undocumented, quirky features. This really has been invaluable in certain instances and I cannot praise enough kudos on the original developer Lutz Roeder. 
Smart Assembly In order to stop nosy geeks looking at our code using a tool like .Net Reflector, we need to obfuscate (mess up) the byte code. Smart Assembly is a tool that does this. Again I have used this for a long time. It is very quick and easy to use. Another excellent tool. Coincidentally, .Net Reflector and Smart Assembly are now both owned by Red Gate. Again kudos goes to the original developer Jean-Sebastien Lange. TortoiseSVN SVN (Apache Subversion) is a Source Control System developed as an open source project. TortoiseSVN is a graphical UI wrapper over SVN that hooks into Windows Explorer to enable files to be Updated, Committed, Merged etc. from the right click menu. This is an essential tool for keeping my hard work safe! Many years ago I used Microsoft Source Safe and I disliked CVS type systems. But TortoiseSVN is simply the best source control tool I have ever used. --- So there you have it, my top 5 development tools that I use (nearly) every day and have helped to make my working life a little easier. I'm sure there are other great tools that I wish I used but have never heard of, but if you have not used any of the above, I would suggest you check them out as they are all very, very cool products. #

    Read the article

  • Xobni Plus for Outlook [Review]

    - by The Geek
    Overview Xobni Plus is an addin that will bring a sidebar to Outlook which allows you to search through your inbox and contacts a lot easier. It provides the ability to search and keep track of your favorite social networks. Searching with Xobni is a lot more powerful than the default search feature in Outlook. It let’s you drill down your searches to conversations, email, links, and attachments. It now supports Outlook 2010 both 32 & 64-bit versions. Installation & Setup Installation is easy following the wizard. After completing the wizard you can tell you’re friends on Facebook and Twitter that you are now using it. You can also decide to join their Product Improvement Program if you want. After installation when you open Outlook, Xobni appears as a sidebar on the right side. Don’t worry about it always being in the way, as you can hide it if you need more room for other Outlook functions. After Xobni free is installed, you can upgrade to the Plus version at any time. A new window will open up and you can use your Credit Card, PayPal, or redeem a code if you have one. Features & Use Where to begin with the amount of features available in Xobni Plus? It really has an amazing amount of cool features. Of course you’ll have all of the features of the Free Version which we previously covered…and a lot more. After Xobni is installed you’ll notice a section for it on the Ribbon. From here you can search Xobni, show or hide the Sidebar, and change other options. It allows you to easily keep up with various social networks like Facebook, Twitter, and LinkedIn… Check out email analytics and contact ranks. Click on the Files Exchanged tab to search for specific attachments. Quickly search links exchanged with your contacts. Hover over a link to get a preview of what it entails. It gives you the ability to index all of your Yahoo mail as well, without the need for purchasing Yahoo Plus! Then your Yahoo messages appear in the Xobni sidebar. When you select a contact you can see related messages from you Yahoo account. Easily index all of your mail…including Yahoo mail for better organization and faster search results. There are several options you can select to change the way Xobni works. From setting up your Yahoo email, Indexing options, and much more. Additional Features of Xobni Plus Advanced Search Capabilities – Filter results, Boolean & Phrase Search, Ability to search Appointments & Tasks, Advanced Search Builder Search unlimited PST data files Xobni contacts in the compose screen Find links exchanged with your contacts View calendar appointments One year premium tech support No Ads! Performance We ran Xobni Plus on Outlook 2010 32-bit on a Dual-Core AMD Athlon system with 4GB of RAM and found it to run quite smoothly. However, we did notice it would sometimes slow down launching Outlook, especially if other apps are running at the same time. Product Support When you buy a license for Xobni Plus you get a full year of premium tech support. They provide a Questions and Answers page on their site where you can run a search query and answers appear instantly. You can contact support directly as a Plus member through their web form and they advise the turn around time is 2 business days. However, when we tested it, we received a response within 24 hours. They also provide FAQ, Community forum, and you can download the Owners Manual in PDF format from the support page. 
Conclusion Xobni Plus is a very powerful addin for Outlook and includes a lot more features that we didn’t cover in this review. You can download Xobni free edition which includes an 8 day free trial of the Plus version. This provides a good way to start getting familiar with it. Then upgrade to Xobni Plus at any time for $29.95. Once you get started, you’ll find the sidebar is nicely laid out and intuitive to use. If you live out of Outlook during the day, Xobni Plus is a great addition for fast and powerful searches. It provides an easy way to keep all of your contacts and messages well organized and easy to find. Xobni Plus works with XP, Vista, and Windows 7 (32 & 64-bit editions) Outlook 2003, 2007 and both 32 & 64-bit editions of Outlook 2010. Download Xobni Plus Download Xobni Free Edition Rating Installation: 8 Ease of Use: 8 Features: 9 Performance: 8 Product Support: 8

    Read the article

  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time:

/// init ///
var duration = 5000,
    currentTime = Date.now();
// and create cube, scene, camera etc.
//////

function animate() {
  /// determine delta ///
  var now = Date.now(),
      deltat = now - currentTime,
      currentTime = now,
      scalar = deltat / duration,
      angle = (Math.PI * 2) * scalar;
  //////
  /// animate ///
  cube.rotation.y += angle;
  //////
  /// update ///
  requestAnimationFrame(render);
  //////
}

Could someone confirm I know how it works? Here is what I think is going on: Firstly, we set duration to 5000, which is how long the loop will take to complete in an ideal world. With a computer that is slow/busy, let's say the animation loop takes twice as long as it should, so 10000. When this happens, the scalar is set to 2.0:

scalar = deltat / duration
scalar = 10000 / 5000
scalar = 2.0

We now multiply all animation by twice as much:

angle = (Math.PI * 2) * scalar;
angle = (Math.PI * 2) * 2.0;
angle = (Math.PI * 4) // which is 2 rotations

When we do this, the cube rotation will appear to 'jump', but this is good because the animation remains real-time. With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500. When this happens, the scalar is set to 0.5:

scalar = deltat / duration
scalar = 2500 / 5000
scalar = 0.5

We now multiply all animation by a half:

angle = (Math.PI * 2) * scalar;
angle = (Math.PI * 2) * 0.5;
angle = (Math.PI * 1) // which is half a rotation

When we do this, the cube won't jump at all, the animation remains real-time, and it doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean, it still goes through the loop as fast as it can, and it still has to render the whole scene, just with different, smaller angles! So this is a bad way to implement delta time, right? Now let's pretend the computer is taking exactly as long as it should, so 5000. When this happens, the scalar is set to 1.0:

angle = (Math.PI * 2) * scalar;
angle = (Math.PI * 2) * 1;
angle = (Math.PI * 2) // which is 1 rotation

When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all! My questions are as follows. Most importantly, have I got the right end of the stick here? How do we know to set the duration to 5000? Or can it be any number? I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? Seems like a better idea. Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it? Is this the best way to implement delta time? I think not, due to the fact that the computer can go nuts and all we do is scale each animation step, and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time? Below is one way that I do not really get but may be a better way to implement delta time. Could someone explain please?

// Globals
INV_MAX_FPS = 1 / 60;
frameDelta = 0;
clock = new THREE.Clock();

// In the animation loop (the requestAnimationFrame callback)…
frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method."
while (frameDelta >= INV_MAX_FPS) {
  update(INV_MAX_FPS); // calculate physics
  frameDelta -= INV_MAX_FPS;
}

How I think this works: Firstly we set INV_MAX_FPS to 0.01666666666. How we will use this number does not jump out at me.
We then initialize frameDelta, which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS so the loop is not run (0 < 0.01666666666). So nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete. We set frameDelta to 2:

frameDelta += clock.getDelta();
frameDelta = 2.00

Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666? And then we take away 0.01666666666 from frameDelta:

frameDelta -= INV_MAX_FPS;
frameDelta = frameDelta - INV_MAX_FPS;
frameDelta = 2 - 0.01666666666
frameDelta = 1.98333333334

So let's go into the second loop. Let's say it took 2 seconds again (why 2? Or 12? I am a bit confused):

frameDelta += clock.getDelta();
frameDelta = frameDelta + clock.getDelta();
frameDelta = 1.98333333334 + 2
frameDelta = 3.98333333334

This time we enter the while loop because 3.98333333334 >= 0.01666666666. We run update. We take away 0.01666666666 from frameDelta again:

frameDelta -= INV_MAX_FPS;
frameDelta = frameDelta - INV_MAX_FPS;
frameDelta = 3.98333333334 - 0.01666666666
frameDelta = 3.96666666668

Now let's pretend the loop is super quick and runs in just 0.1 seconds and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from frameDelta until frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light please? Does update() update the scalar or something like that, and do we still have to multiply everything by the scalar like in the first example?
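A minimal sketch of the same accumulator pattern may make the second snippet clearer. It assumes a hypothetical update(dt) that advances the simulation by exactly dt seconds and a hypothetical render() that draws the current state; the 1/60 constant is nothing more than the length of one fixed simulation step (about 16.7 ms).

var FIXED_STEP = 1 / 60;      // one simulation step, in seconds (~16.7 ms) -- the INV_MAX_FPS above
var accumulator = 0;          // unsimulated time we still "owe", in seconds
var previousTime = Date.now();

function frame() {
  var now = Date.now();
  accumulator += (now - previousTime) / 1000; // real time elapsed since the last frame, in seconds
  previousTime = now;

  // Consume the accumulated time in fixed-size slices. After a slow frame the
  // accumulator is large, so update() runs several times to catch up; after a
  // very fast frame it may run zero times and the remainder carries over.
  while (accumulator >= FIXED_STEP) {
    update(FIXED_STEP);       // hypothetical: advances physics/animation by exactly FIXED_STEP seconds
    accumulator -= FIXED_STEP;
  }

  render();                   // hypothetical: draws whatever the current state is
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);

Because update() always receives the same dt, nothing inside it has to be multiplied by a per-frame scalar; catching up after a slow frame is done by running update() more times, and a fast machine simply banks the leftover fraction in the accumulator until a whole step is owed.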

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex, systems used by customers wanting extreme database performance, app performance, and cost saving server consolidation. A SPARC SuperCluster consists or 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast Infiniband switches.  Each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful.  In another, complex. The system is highly Engineered.  But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components.  For example, there's myriad network protocols which could be used with Infiniband.  Myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system ?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here's a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up).  We quickly established that an enhancement in SRU12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously.  Under abnormally heavy load, the query could return results which were misinterpreted resulting in node eviction.  
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root caused, evaluated, and a fix agreed upon.  That fix went back into all Solaris releases the following Monday.  From initial issue discovery to the fix being put back into all Solaris releases was just 10 days. Example 2: A customer reported sporadic performance degradation.  The reasons were unclear and the information sparse.  The SPARC SuperCluster Engineered Systems support teams which comprises both SPARC/Solaris and Database/Exadata experts worked to root cause the issue.  A number of contributing factors were discovered, including tunable parameters.  An intense collaborative investigation between the engineering teams identified the root cause to a CPU bound networking thread which was being starved of CPU cycles under extreme load.  Workarounds were identified.  Modifications have been put back into 11gR2 to alleviate the issue and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side.  The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented.  The customer is now extremely happy with performance and robustness.  Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers.  This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.  If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue.  But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer.  That is a key advantage of Engineered Systems which should not be underestimated.  Indeed, the initial issue mitigation on the Database side followed by final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams.  It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get most out of Oracle technologies to evolve your IT to the Next Wave, I encourage you to register to the up coming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers having successfully evolve their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will make an update about Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to make a status of Cloud Computing in France, and CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same : think hybrid. From Traditional IT to Clouds One size doesn't fit all, and for big companies having already an IT in place, there will be parts eligible to external (public) cloud, and parts that would be required to stay inside the firewalls, so ability to integrate both side is key.  None the less, one of the major impact of Cloud Computing trend on IT, reported by Forrester, is the pressure it makes on CIO to evolve towards the same model that end-users are now used to in their day to day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself toward "a Global Service Provider", or for some as "IT "is" the Business" (see : Gartner Identifies Four Futures for IT and CIO), and for both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most of existing external Cloud and a firm IT : the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day, there are really few business processes that rely on only one application. So CIOs has to combine everything together external and internal. And for the internal parts that they will have to make them evolve to a Private Cloud, the scope can be very large. This will often require CIOs to evolve from their traditional approach to more disruptive ones, the time has come to introduce new standards and processes, if they want to succeed. So let's have a look at the different Cloud models, what type of users they are addressing, what value they bring and most importantly what needs to be done by the  Cloud Provider, and what is left over to the user. IaaS, PaaS, SaaS : what's provided and what needs to be done First of all the Cloud Provider will have to provide all the infrastructure needed to deliver the service. And the more value IT will want to provide, the more IT will have to deliver and integrate : from disks to applications. As we can see in the above picture, providing pure IaaS, left a lot to cover for the end-user, that’s why the end-user targeted by this Cloud Service is IT people. If you want to bring more value to developers, you need to provide to them a development platform ready to use, which is what PaaS is standing for, by providing not only the processors power, storage and OS, but also the Database and Middleware platform. SaaS being the last mile of the Cloud, providing an application ready to use by business users, the remaining part for the end-users being configuring and specifying the application for their specific usage. 
In addition to that, there are common challenges encompassing all types of Cloud Services:
Security: covering all aspects, not only user management but also data flows and data privacy
Chargeback: measuring what is used and by whom
Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, Database and Middleware for PaaS, to a full Business Application for SaaS
Scalability: the ability to evolve ALL the components of the Cloud Provider stack as needed
Availability: the ability to cover “always on” requirements
Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
Management: providing monitoring, configuration and self-service up to the end users
Oracle Strategy and Clouds
For CIOs, succeeding in their Private Cloud implementation means encompassing all those aspects for each component life-cycle they selected to build their Cloud. That's where a multi-vendor layered approach falls short in terms of efficiency. That's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We are going as far as embedding software functions in hardware (storage, processor level, ...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we are using to build Clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about Cloud), we are delivering all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings in terms of implementation project timeline alone: with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enables CIOs to very quickly build Cloud Services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all those aspects with regard to your particular context face to face on November 28th.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:
· Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc.
· Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp)
· Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work.
· If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
· Update various statistics about the packet.
In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically failover to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB is used in over 200 million deployments worldwide for the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or, you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
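The per-packet workflow listed above can be sketched in a few lines. This is only an illustration with plain in-memory stand-ins (a Map for the routing table, an array for the held-back packets) rather than Berkeley DB's actual API, and handlePacket and send are invented names:

// Sketch of the per-packet decision flow described above, using in-memory
// stand-ins instead of an embedded database.
var routes = new Map();        // destination -> next hop (stand-in for the routing table)
var slas = new Map();          // owner -> { priority: 'voice' | 'data' }
var lowPriorityQueue = [];     // packets held back until there is spare capacity
var stats = { forwarded: 0, queued: 0 };

function handlePacket(packet) {
  // 1. Look up the most efficient next hop for this destination.
  var nextHop = routes.get(packet.destination);

  // 2. Look up the SLA to decide the packet's priority.
  var sla = slas.get(packet.owner) || { priority: 'data' };

  if (sla.priority === 'voice') {
    // 3a. High priority: forward immediately.
    send(packet, nextHop);     // hypothetical send()
    stats.forwarded++;
  } else {
    // 3b. Low priority: store temporarily and forward later, in arrival order.
    lowPriorityQueue.push({ packet: packet, nextHop: nextHop });
    stats.queued++;
  }
  // 4. Update statistics (done above via stats counters).
}

In the real router every one of those lookups and updates would go against the embedded database, wrapped in a single transaction so that, for example, the forwarding address cannot change between the lookup and the send.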

    Read the article

  • Airline mess - what a journey

    - by Mike Dietrich
    What a day, what a journey ... Flew this noon from Munich to Zuerich for catch my ongoing flight to San Francisco with Swiss. And that day did start very well as Lufthansa messed up the connection flight by 42 minutes for a 35 minute flight. And as I was obviously the only passenger connection to San Francisco nobody picked me up at the airplane to bring me directly to my connection as Swiss did for the 8 passengers connection to Miami. So I missed my flight. What a start - and many thanks to Lufthansa. I was not the only one missing a connection as Lufthansa/Swiss had canceled the flight before due to "technical problems". In Zuerich Swiss did rebook me via Frankfurt with Lufthansa to board a United Airlines flight to San Francisco. "Ouch" I thought. I had my share of experience with United already as they've messed up my luggage on the way to San Francisco some years ago and it took them five (!!!) days to fly my bag over and deliver it. But actually it was the only option today. So I said "Yes". A big mistake as I've learned later on. The Frankfurt flight was delayed as well "due to a late incoming aircraft". But there was plenty of time. And I went to the Swiss counter at the gate and let them check if my baggage is on that flight to Frankfurt. They've said "Yes". Boarding the plane with a delay of 45 minutes (the typical Lufthansa delay these days) I spotted my Rimowa trolley right next to the plane on the airfield. So I was sure that it will be send to Frankfurt. In Frankfurt I went to the United counter once it did open - had to go through the passport check they do for US flights as well - and they've said "Yes, your luggage is with us". Well ... Arriving in San Francisco with just a bit of a some minutes delay and a very fast immigration procedure I saw the first bags with Priority tags getting pushed to the baggage claim - but mine was not there. I did wait ... and wait ... and wait. Well, thanks United, you did it again!!! I flew twice in the past years United Airlines - and in both cases they've messed up my luggage on the way to San Francisco. How lovely is that ... Now the real fun started again as the lady at the "Lost and Found" counter for luggage spotted my luggage in her system in Zuerich - and told me it's supposed to be sent with LH1191 to Frankfurt on Sept 27. But this was yesterday in Europe - it's already Sept 28 - and I saw my luggage in front of the airplane. So I'd suppose it's in Frankfurt already. But what could she do? Nothing but doing the awful paperwork. And "No Mr Dietrich, we don't call international numbers". Thank you, United. Next time I'll try to get a contract for a US land line in advance. They can't even tell you which plane will bring your luggage. It may be tomorrow with UA flight arriving around 4pm in SFO. I'm looking forward to some hours in the wonderful United Airlines call center waiting line. Last time I did spend 60-90 minutes every day until I got my luggage. If it takes again that long then OOW will be over by then. I love airline travel - and especially with United Airlines. And by the way ... they gave us these nice fancy packages during the flight:  That looks good - what's in that box??? Yes, really ... a bag of potato chips. Pure fat - very healthy.  I doubt that I'll ever fly United Airlines again!!!

    Read the article

  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
    With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date. Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses. The company looked to standardize on an enterprise project management system, and selected Oracle’s Primavera P6 Enterprise Project Portfolio Management. Webcor uses the solution’s advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in helping the company’s expansion into public sector projects during the recent economic downturn, and with Primavera P6 in place, it can deliver the precise scheduling and milestone reporting required for large public projects. The solution is now being used to manage the high-profile University of California – Berkeley Memorial Stadium project. Webcor was hired as construction manager and general contractor for the stadium renovation project, which is a fast-paced project located near the seismically active Hayward Fault Zone. Due to the University of California’s football schedule, meeting the University’s deadline for the coming season placed Webcor in a situation where risk awareness and early warnings of issues would be paramount. Webcor and the extended project team needed a solution that could instantly analyze alternate scenarios to mitigate potential delays; Primavera would deliver those answers. The team would also need to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and model complicated sequencing requirements where swift decisions would be made to keep the project on track. The schedule is an integral part of Webcor’s construction management process for the stadium project.
Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews the schedule and updates weekly to confirm that progress and forecasted performance are accurate. Hired by the University for its ability to deliver in high-risk environments, the Webcor team was recently hit with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action, analyzing multiple “what if” scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact with its creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value. With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs. Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a complete path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

How does this benefit companies that have their own data center?
It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of the services are much faster provisioning and faster response to changes in business requirements.

From a deployment perspective, is DBaaS solely the DBA's job?
The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service.
Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog. (A purely illustrative sketch of what such a catalog might look like follows at the end of this post.)

Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

How does Database as a Service impact or help with "Click to Compute" or deploying a "Database in cloud infrastructure"?
DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

How exactly is DBaaS different from a traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be cross-database access by application/user?
Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, rapidly and without disruption. 4) The user is able to provision the services directly from a standardized service catalog.

With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the different needs and patching downtime of the apps the databases might be serving?
You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new-version/patched multitenant DB in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
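As a purely illustrative sketch of the standardized service catalog mentioned above (all class names, template identifiers, and sizes here are invented, not Oracle-defined artifacts), the administrator authors a structure like this once and every self-service request reduces to a selection from it:

// Illustrative only: a tiny DBaaS service catalog with three classes of service.
// Template names and shapes are made up for the example.
var serviceCatalog = {
  bronze: {
    template: 'single-instance',               // hypothetical pre-defined deployment template
    sizes: { small: { cpus: 2, memoryGb: 8 }, medium: { cpus: 4, memoryGb: 16 } }
  },
  silver: {
    template: 'single-instance-with-standby',
    sizes: { medium: { cpus: 4, memoryGb: 16 }, large: { cpus: 8, memoryGb: 32 } }
  },
  gold: {
    template: 'two-node-cluster',
    sizes: { large: { cpus: 8, memoryGb: 32 }, xlarge: { cpus: 16, memoryGb: 64 } }
  }
};

// A self-service request is then just a pick from the catalog.
function requestDatabase(serviceClass, size) {
  var entry = serviceCatalog[serviceClass];
  return { template: entry.template, shape: entry.sizes[size] };
}

// e.g. requestDatabase('silver', 'large')
// -> { template: 'single-instance-with-standby', shape: { cpus: 8, memoryGb: 32 } }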

    Read the article

  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk where I publish articles. They are a good company. I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This wide spread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good, still love it. It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source, it was free, but you never were able to peruse the source code and contribute your own changes. You could not even use Reflector to view the source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source. Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order. Red Gate tried to keep the core product free. Sadly there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? The value added proposition is not a difficult one to make. I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic, and I am sure that many great improvements are on the horizon. 
Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general). A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlying your particular layer. On the topic of confounding factors, as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore as you have to remember to turn it off and then back on otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general for performance characterization I disable turbo mode and fix the power governor at "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries. Lets say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem). At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally if there's a key performance issue, you'll spot it with a moderate number of samples. 
    I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This gives a broad but useful sense of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc acquiring the lock in the stack traces, observe a high csw/vctx rate as threads block on the malloc lock, and your "usr" time would be less than expected. Solaris DTrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS- and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat and pstack as the first step in performance triage. If some other OS-level issue is evident, then it's good to switch to DTrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
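    As a concrete illustration of the "very simple targeted program" idea above, here is a minimal sketch (my own, not part of the original post): it measures how many random 4 KB reads per second a file can sustain, with an explicit warm-up pass so JIT compilation doesn't skew the measured interval. The file name, buffer size, and iteration counts are arbitrary assumptions, and the result includes the OS page cache, so treat it as a starting point rather than a definitive disk benchmark.

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Random;

    public class ReadRateProbe {
      public static void main(String[] args) throws IOException {
        // Assumes testdata.bin exists and is larger than the 4 KB read size.
        try (RandomAccessFile file = new RandomAccessFile("testdata.bin", "r")) {
          byte[] buf = new byte[4096];
          Random rnd = new Random(42);
          long maxOffset = file.length() - buf.length;
          runPass(file, buf, rnd, maxOffset, 10_000);              // warm-up pass: let the JIT settle
          long start = System.nanoTime();
          int reads = runPass(file, buf, rnd, maxOffset, 100_000); // measured pass
          double secs = (System.nanoTime() - start) / 1e9;
          System.out.printf("%.0f reads/sec%n", reads / secs);
        }
      }

      // Performs 'iterations' random reads and returns the count so the caller can compute a rate.
      private static int runPass(RandomAccessFile file, byte[] buf, Random rnd,
                                 long maxOffset, int iterations) throws IOException {
        for (int i = 0; i < iterations; i++) {
          file.seek((long) (rnd.nextDouble() * maxOffset)); // random offset within the file
          file.readFully(buf);
        }
        return iterations;
      }
    }

    Running something like this in one window while watching "mpstat -a 5" in another ties the two steps together: the probe gives you a number to compare against the claimed system limit, and mpstat tells you whether the CPUs were actually busy or mostly idle while it ran.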

    Read the article

  • Optimizing a thread safe Java NIO / Serialization / FIFO Queue [migrated]

    - by trialcodr
    I've written a thread safe, persistent FIFO for Serializable items. The reason for reinventing the wheel is that we simply can't afford any third party dependencies in this project and want to keep this really simple. The problem is it isn't fast enough. Most of it is undoubtedly due to reading and writing directly to disk but I think we should be able to squeeze a bit more out of it anyway. Any ideas on how to improve the performance of the 'take'- and 'add'-methods? /** * <code>DiskQueue</code> Persistent, thread safe FIFO queue for * <code>Serializable</code> items. */ public class DiskQueue<ItemT extends Serializable> { public static final int EMPTY_OFFS = -1; public static final int LONG_SIZE = 8; public static final int HEADER_SIZE = LONG_SIZE * 2; private InputStream inputStream; private OutputStream outputStream; private RandomAccessFile file; private FileChannel channel; private long offs = EMPTY_OFFS; private long size = 0; public DiskQueue(String filename) { try { boolean fileExists = new File(filename).exists(); file = new RandomAccessFile(filename, "rwd"); if (fileExists) { size = file.readLong(); offs = file.readLong(); } else { file.writeLong(size); file.writeLong(offs); } } catch (FileNotFoundException e) { throw new RuntimeException(e); } catch (IOException e) { throw new RuntimeException(e); } channel = file.getChannel(); inputStream = Channels.newInputStream(channel); outputStream = Channels.newOutputStream(channel); } /** * Add item to end of queue. */ public void add(ItemT item) { try { synchronized (this) { channel.position(channel.size()); ObjectOutputStream s = new ObjectOutputStream(outputStream); s.writeObject(item); s.flush(); size++; file.seek(0); file.writeLong(size); if (offs == EMPTY_OFFS) { offs = HEADER_SIZE; file.writeLong(offs); } notify(); } } catch (IOException e) { throw new RuntimeException(e); } } /** * Clears overhead by moving the remaining items up and shortening the file. */ public synchronized void defrag() { if (offs > HEADER_SIZE && size > 0) { try { long totalBytes = channel.size() - offs; ByteBuffer buffer = ByteBuffer.allocateDirect((int) totalBytes); channel.position(offs); for (int bytes = 0; bytes < totalBytes;) { int res = channel.read(buffer); if (res == -1) { throw new IOException("Failed to read data into buffer"); } bytes += res; } channel.position(HEADER_SIZE); buffer.flip(); for (int bytes = 0; bytes < totalBytes;) { int res = channel.write(buffer); if (res == -1) { throw new IOException("Failed to write buffer to file"); } bytes += res; } offs = HEADER_SIZE; file.seek(LONG_SIZE); file.writeLong(offs); file.setLength(HEADER_SIZE + totalBytes); } catch (IOException e) { throw new RuntimeException(e); } } } /** * Returns the queue overhead in bytes. */ public synchronized long overhead() { return (offs == EMPTY_OFFS) ? 0 : offs - HEADER_SIZE; } /** * Returns the first item in the queue, blocks if queue is empty. */ public ItemT peek() throws InterruptedException { block(); synchronized (this) { if (offs != EMPTY_OFFS) { return readItem(); } } return peek(); } /** * Returns the number of remaining items in queue. */ public synchronized long size() { return size; } /** * Removes and returns the first item in the queue, blocks if queue is empty. 
*/ public ItemT take() throws InterruptedException { block(); try { synchronized (this) { if (offs != EMPTY_OFFS) { ItemT result = readItem(); size--; offs = channel.position(); file.seek(0); if (offs == channel.size()) { truncate(); } file.writeLong(size); file.writeLong(offs); return result; } } return take(); } catch (IOException e) { throw new RuntimeException(e); } } /** * Throw away all items and reset the file. */ public synchronized void truncate() { try { offs = EMPTY_OFFS; file.setLength(HEADER_SIZE); size = 0; } catch (IOException e) { throw new RuntimeException(e); } } /** * Block until an item is available. */ protected void block() throws InterruptedException { while (offs == EMPTY_OFFS) { try { synchronized (this) { wait(); file.seek(LONG_SIZE); offs = file.readLong(); } } catch (IOException e) { throw new RuntimeException(e); } } } /** * Read and return item. */ @SuppressWarnings("unchecked") protected ItemT readItem() { try { channel.position(offs); return (ItemT) new ObjectInputStream(inputStream).readObject(); } catch (ClassNotFoundException e) { throw new RuntimeException(e); } catch (IOException e) { throw new RuntimeException(e); } } }
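    One direction that seems worth measuring, sketched below as my own illustration rather than a drop-in fix for the class above: add() currently creates a fresh ObjectOutputStream for every item (each one writes its own stream header) and then updates the size/offs header with separate writeLong calls through a RandomAccessFile opened in "rwd" mode, so each of these small writes is forced out synchronously. Serializing the item into an in-memory buffer first and appending it with a single channel write reduces the number of I/O operations per add; the two header longs could likewise be combined into one buffered write. The helper name appendItem is made up for illustration.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class BufferedAppend {
      /**
       * Hypothetical helper: serialize the item in memory, then append it to the end of the
       * queue file with a single large write. The caller is expected to hold the queue lock
       * and to update the size/offs header afterwards, as in the original add().
       */
      public static void appendItem(FileChannel channel, Serializable item) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(item);
        out.flush();
        ByteBuffer buffer = ByteBuffer.wrap(bytes.toByteArray());
        channel.position(channel.size());  // append at the end of the file
        while (buffer.hasRemaining()) {
          channel.write(buffer);           // one large write instead of many small ones
        }
      }
    }

    Whether this helps in practice depends on how large the items are and on how much of the cost comes from the forced sync of "rwd" mode itself, so it is worth timing both variants with a simple targeted benchmark before committing to either.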

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully. I'm talking about "feelings", not about advanced technical points proved in a scientific and objective way. I still haven't had a chance to play with an MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers."), and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, you can be called an "MS fanboy"; I don't care, and I hope that people can appreciate my opinion even if it doesn't match theirs.

    The "Surface" brand can be confusing for techies who knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. Marketing departments are here to confuse people… so I can understand this recycling of an existing name.

    So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new model) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but where?) or just lead the pack, showing how devices should be designed to compete in the market and bringing back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop from the huge mass of anonymous PCs on display… only Macs shine out there…). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition in a market that has been ruled only by prices (the lower the better, even when that means low quality) with no innovative features at all (when was the last time a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is playing a crucial move for its future, trying to get back into the innovation race against Apple and Google. MS can't afford to fail this time.

    I saw the new devices (the WinRT and the Pro) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using "HD" and "full HD" to describe display resolution instead of giving the real figures, and reviving the "ClearType" brand (now dead on Win8, as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "retina" displays, which brought very high definition screens to tablets. Also there are no specifications about the processors used (even if some sources report an NVidia Tegra for the ARM tablet and an i5 for the x86 one) or the expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets).
    There is also nothing about the price, and this will be another critical point, because other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive. There are some expansion ports (SD and USB), so there is no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and I don't like the Apple model where flash memory (dirt cheap when used in thumb drives or SD cards) is as expensive as gold (or cocaine, to use a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope.

    To be honest I really don't like the marketplace model, the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) and the lack of desktop support on ARM (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent in the past to convert .NET IL into real machine code seem like a huge waste of time… as soon as a new processor architecture is supported by Windows you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menus of VS2012 haven't changed this situation.

    The new Metro UI got mixed reviews. On my side I should say that it is very pleasant to use on a touch screen, and I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices, where touch screen usage is quite common and where having an application take over the whole screen is the norm. For devices like kiosks, vending machines etc. this kind of UI can be a great selling point.

    I don't need a new tablet (to be honest I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!

    Read the article

< Previous Page | 315 316 317 318 319 320 321 322 323 324 325 326  | Next Page >