Search Results

Search found 9124 results on 365 pages for 'big sal'.


  • Eyefinity across 3 displays with a Radeon 6800

    - by Peter G Mac.
    So I purchased a computer recently and have been trying to customise the display: a Radeon HD 6800 series card on Ubuntu 10.10. I have three 22-inch 1080p LCD monitors that are mounted together, and everything is working smoothly. How do I get the 'big desktop' mode, where I have one enormous display spanning all monitors? The Linux ATI Catalyst Control Center 11.2 does not give me an option to 'group' my profiles the way the pictures on their site show it working on Windows. I have been searching all over for help. Much obliged, -Peter

    Read the article

  • Need help starting with a game in Flash/AS3

    - by Hossein
    Hi, I am kind of new to Flash/AS3. I have written some things and now I know the basics of AS3. I want to develop a game in which my character moves exactly the way Super Mario moves: the background changes and new obstacles come into the way, like an animation you can interact with by climbing, shooting, etc. The one big thing I don't understand is how to make my background move and introduce new obstacles as my character moves forward. Currently my scene is static, meaning the background stays the same and the obstacles are also stationary. Can someone point me to a tutorial, or give me an answer that resolves my confusion? Thanks
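
    One common way to think about this (a sketch of one approach, not the only one): keep the player, obstacles, and background in world coordinates, and subtract a camera offset when drawing. Below is a minimal illustration of that camera logic in TypeScript (all names are invented); in AS3 the usual equivalent is to put the level inside a container Sprite and set its x to -camera.x each frame.

    ```typescript
    // Minimal side-scroller camera: follow the player, clamp to world bounds,
    // and convert world x-coordinates to screen x-coordinates for drawing.
    interface Entity { x: number; y: number; }

    class Camera {
      x = 0; // world-space position of the left edge of the screen

      constructor(private viewWidth: number, private worldWidth: number) {}

      follow(player: Entity): void {
        // Keep the player roughly centered, without showing past the world edges.
        const target = player.x - this.viewWidth / 2;
        this.x = Math.max(0, Math.min(target, this.worldWidth - this.viewWidth));
      }

      toScreenX(worldX: number): number {
        return worldX - this.x; // obstacles never move; only the camera does
      }
    }

    // Each frame: update player physics, call camera.follow(player), then draw
    // every visible object at toScreenX(obj.x). New obstacles "appear" simply
    // by being placed in world coordinates ahead of the player.
    ```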

    Read the article

  • Oracle Social Network Developer Challenge Winners

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog.

    Now that OpenWorld 2012 has wrapped, I have time to tell you all about what happened. Maybe you recall that Noel (@noelportugal) and I were running a modified hackathon during the show, the Oracle Social Network Developer Challenge. Without further ado, congratulations to Dimitri Gielis (@dgielis) and Martin Giffy D'Souza (@martindsouza) on their winning entry: an integration between Oracle APEX and Oracle Social Network that ties feedback and bug submission into Oracle Social Network Conversations, allowing developers, end users, and project leaders to view and discuss feedback on their APEX applications from within Oracle Social Network.

    Update: Bob Rhubart of OTN (@brhubart) interviewed Dimitri and Martin right after their big win. Money quote from Dimitri when asked what he'd buy with the $500 in Amazon gift cards: "Oracle Social Network." Nice one.

    In their own words: From the developer's perspective, it's important to get feedback early, so after a first iteration, when end users start to test, they can give feedback on the application. Previously it stopped there, and it was up to the developer to communicate further by email, phone, and so on. With OSN, every piece of feedback and communication gets logged, and other people can see the discussion immediately as well. From the end user's perspective, they can now communicate more efficiently, not only with the developers but also among themselves. If many end users (in different locations) would like to change some behaviour, with OSN they can see the entry somebody put in, along with a screenshot, and just start to chat about it. Key technical end users can lighten the tasks of the development team by looking at the feedback first and communicating with their peers. The project manager now has the ability to see what communication has taken place in certain areas and can make decisions based on that. Later, if things come up again, he can always go back in OSN and see what was said at that moment in time. Integrating OSN into APEX applications enhances the user experience, makes the lives of the developers easier, and gives a better overview to project managers.

    Incidentally, you may already know Dimitri and Martin, since both are Oracle ACE Directors. I ran into Martin at the ACE Director briefings the Friday before the conference started, and at that point he wasn't sure he'd have time to enter the Challenge. After some coaxing, he and Dimitri agreed to give it a go and banged out their entry on Tuesday night, or more accurately, very early Wednesday morning, the day of the Challenge judging. I think they said it took them about four hours of hardcore coding to get it done, very much like a traditional hackathon, which is essentially a code sprint from idea to finished product. Here are some screenshots of the workflow they built.

    I love this idea, i.e. closing the loop between web developers and users, a very common pain point, and so did our judges.

    Speaking of, special thanks to our panel of three judges: Reggie Bradford (@reggiebradford), serial entrepreneur, founder of Vitrue, and SVP of Cloud Product Development at Oracle; Robert Hipps (@roberthipps), VP of Development for Oracle Social Network and my former boss; and Roland Smart (@rsmartx), VP of Social Marketing and the brains behind the Oracle Social Developer Community.

    Finally, thanks to everyone who made this possible, including: the three other teams, from HarQen (@harqen), TEAM Informatics (@teaminformatics), and Fishbowl Solutions (@fishbowle20) featuring Friend of the 'Lab John Sim (@jrsim_uix), who finished and presented entries (I'll be posting the details of their work this week); the one guy who finished an entry but couldn't make the judging, Bex Huff (@bex), who rallied from a hospitalization due to an allergic reaction during the show (he's fine, don't worry; I'll post details of his work next week, too); the 40-plus people who registered to compete in the Challenge; Noel, for all his hard work, sample code, and flying monkey target, more on that to come; the Oracle Social Network development team for supporting this event; everyone in legal and the beta program office for their help; and finally, the Oracle Technology Network (@oracletechnet) for hosting the event and providing countless hours of operational and moral support. Sorry if I've missed some people, since this was a huge team effort.

    This event was a big success, and we plan to do similar events in the future. Stay tuned to this channel for more.

    Read the article

  • Tiled game: how do I correctly load a background image?

    - by stighy
    Hi, I'm a newbie. I'm trying to develop a 2D game (top-down view) and would like to load a standard background: a textured ground. My "world" is big, for example 3000px x 3000px, and I think it is not a good idea to load a 3000px x 3000px image and move it around. So what is the best practice? Load a single small image (64x64) and repeat it N times? If yes, OK, but then how can I manage the "background" movement? Thanks. Bye!
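
    The repeated-tile approach usually works like this: the background never moves at all; instead, you track a camera position and draw only the tiles that overlap the screen, offset by the camera. A hedged sketch (TypeScript against the HTML canvas API purely for illustration; the names are invented):

    ```typescript
    // Draw a repeating 64x64 ground tile so only the visible area is drawn.
    // "Background movement" is just the camera offset applied when drawing.
    const TILE = 64;

    function drawGround(
      ctx: CanvasRenderingContext2D,
      tile: HTMLImageElement,      // the single 64x64 texture
      camX: number, camY: number,  // camera position in world pixels
      viewW: number, viewH: number // screen size in pixels
    ): void {
      // First tile that is (at least partly) visible, in tile coordinates.
      const startCol = Math.floor(camX / TILE);
      const startRow = Math.floor(camY / TILE);
      for (let row = startRow; row * TILE < camY + viewH; row++) {
        for (let col = startCol; col * TILE < camX + viewW; col++) {
          // World position minus camera = screen position.
          ctx.drawImage(tile, col * TILE - camX, row * TILE - camY);
        }
      }
    }
    ```

    Moving the camera a few pixels per frame then gives the illusion that the whole 3000px x 3000px ground is scrolling, while only one 64x64 image is ever in memory.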

    Read the article

  • Legal Precautions of Customizing Ubuntu LiveCD

    - by Voulnet
    Hello everyone. The organization I work at wants to create a custom Ubuntu LiveCD. The customizations are: pre-installed programs, plugins, some device drivers, and aesthetics such as icons and backgrounds, as well as changing Firefox's homepage and removing unneeded packages. These are not big changes, obviously, and we wish to distribute this custom image for clients to use as a bootable CD or USB stick, in order to have a quick environment where all our tools are available instantly. What are the licensing and legal consequences of this? What if some of the programs or plugins that are to be pre-packaged are not GPL'd? I should finally note that we are not changing any code in the kernel or any other distro component. Thank you for your time!

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at the IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California, for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers: one a venture capitalist with a background in math, computer engineering, and physics, and the other a technology and finance writer well versed in all aspects of culture and humanities.

    Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing contextual cues associated with the keyword concepts so that you get many more "smart" (that is, human-like) deductions, rather than a series of "dumb" matches. Jeopardy questions often require more than keyword matching to get the correct answer; typically, several pieces of information, often from vastly different categories, must be put together to come up with a satisfactory word-string solution that can be rephrased as a question. Smarter than your average search engine, but is it as smart as a human?

    Watson was especially fast at descrambling mixed-up state capital names, and at recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to the human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy: timing their buzz to the end of the reading of the clue, and "running the board," being first to respond on about 60% of the clues. Watson produced similar results.

    It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy.

    So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals, which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well... deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to "slow Watson down a bit," and the humans came back with respectable scores in Double Jeopardy. This was partially thanks to a very sympathetic audience (and host, also a human) providing "group-think" on many questions, especially baseball's most valuable players, which, by the way, couldn't have been hard, because even I knew them. Yes, that's right, the humans cheated. Since Watson could speak but not hear us (it didn't have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law.

    In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I'm not sure I'd want to work side by side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took four years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you "the expert" in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future, enabling users to better process big data. How do you think you'd like it?

    Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it... unimpressed!

    Read the article

  • Caching large amounts of AJAX-returned objects

    - by ofcapl
    I'm building an application which fetches a large number of items via AJAX requests to another application's API. The API returns 6k-30k JS objects, which are used multiple times across various application views (sorting, filtering, etc.). I would like to avoid querying the API every time for such a big list, so I decided to cache this data somehow. I was thinking about various solutions: saving it to localStorage, using some caching library (e.g. locachejs), or storing it in a JS variable. I'm not an expert, so I would like to hear your suggestions about each (or one) of these solutions, with their pros and cons. Any help will be very much appreciated.
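
    As a hedged sketch of the localStorage option only (the key name, TTL, and URL handling are invented; locache and a plain in-memory variable follow the same read-through pattern):

    ```typescript
    // Read-through cache: serve from localStorage while fresh, else refetch.
    interface CacheEntry<T> { savedAt: number; data: T; }

    const TTL_MS = 15 * 60 * 1000; // e.g. consider the list stale after 15 min

    async function getItems<T>(url: string, key: string): Promise<T> {
      const raw = localStorage.getItem(key);
      if (raw) {
        const entry: CacheEntry<T> = JSON.parse(raw);
        if (Date.now() - entry.savedAt < TTL_MS) return entry.data;
      }
      const data: T = await (await fetch(url)).json();
      try {
        localStorage.setItem(key, JSON.stringify({ savedAt: Date.now(), data }));
      } catch {
        // localStorage is capped (commonly ~5 MB), and 30k objects may not
        // fit; in that case we simply serve from the network each time.
      }
      return data;
    }
    ```

    The quota caveat in the catch block is the main trade-off versus an in-memory variable, which has no size cap but is lost on every page reload.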

    Read the article

  • Recovering a website

    - by Jessica
    I found my website in the Wayback Machine a few months ago, but today I tried again and now it tells me it can't find robots.txt. My old web host stopped paying for their servers back in August without any notice; I was going to make a backup the day it happened. Is there a way just to recover the text? I have the old IP and the images, but nothing else. None of the big search engines keep caches anymore, and I already looked in the caches of three of my Macs with nothing to be found.

    Read the article

  • 7 Reasons for Abandonment in eCommerce and the Need for Contextual Support, by JP Saunders

    - by Tuula Fai
    Shopper confidence, or more accurately the lack thereof, is the bane of the online retailer. There are a number of questions that influence whether a shopper completes a transaction, and all of them revolve around knowledge. What products are available? What products are on offer? What would be the cost of the transaction? What are my options for delivery? In general, most online businesses do a good job of answering basic questions around the products as the shopper engages in the online journey, navigating the product catalog and working through the checkout process. The needs that are harder to address are those that are less concerned with product specifics and more concerned with deciding whether the transaction meets the shopper's needs and delivers value.

    A recent study by the Baymard Institute [1] finds that more than 60% of ecommerce site visitors will abandon their shopping cart. The study also identifies seven reasons for abandonment out of the commerce process [2]. Most of those reasons come down to poor usability within the commerce experience.

    1. Distractions. External distractions in the shopper's environment (TV, children, pets, etc.) or distractions on the eCommerce page itself can drive shopper abandonment. Ideally, the selection and checkout process should be straightforward. One common distraction is driving the shopper away from the task at hand through pop-ups or redirects. A shopper engaging with support information in the checkout process should not be directed away from the page to consume it. Though confidence may improve, the distraction also means abandonment may increase.

    2. Poor usability. When the experience gets more complicated, buyer's remorse can set in. While knowledge drives confidence, a lack of understanding erodes it, so it is important that the commerce process is streamlined. In some cases, the number of clicks to complete a purchase is lengthy and unavoidable. In these situations, it is vital to ensure that the complexity of your experience can be explained with contextual support to avoid abandonment. If you can illustrate the solution to a complex action while the user is engaged in that action, and address customer frustrations with your checkout process before they arise, you can decrease abandonment.

    3. Fraud. The perception of potential fraud can be enough to deter a buyer. Does your site look credible? Can shoppers trust your brand? Providing answers on the security of your experience and the levels of protection applied to profile information may play as big a role in ensuring the sale as the support you provide on the product offerings and the purchasing process.

    4. Does it fit? For a clothing item or an oversized furniture item, another common form of abandonment is the shopper questioning whether the item will fit the intended user or space. Providing information on clothing sizes, physical dimensions, and limitations on delivery and returns of oversized items will also assist the sale. A photo alone helps, as it answers some of those questions, but it won't assuage all customer concerns about sizing and fit.

    5. Sometimes the customer doesn't want to buy. Prospective buyers might be browsing through your catalog to kill time, or just might not have the money to purchase the item. No amount of contextual support will increase the likelihood to buy if the shopper has no intention of doing so; the customer will still likely abandon. But ensuring that any questions are proactively answered as they browse your site can only increase their likelihood to return and buy at a future date.

    6. Can't buy. Errors or complexity at checkout can be another major cause of abandonment. Good contextual support is unlikely to help with severe errors caused by technical issues on your site, but it will have a big impact on customers struggling with complexity in the checkout process and needing a question answered prior to completing the sale. Embedded support within the checkout process that patiently explains how to complete a task will help increase conversion rates.

    7. Additional costs. Tax, shipping, and other costs or duties can dramatically increase the cost of the purchase and, when unexpected, can increase abandonment, particularly if they can't be adequately explained. Again, a lack of knowledge erodes confidence in the purchase, and cost concerns in particular erode the perception of your brand's trustworthiness. Providing information on which costs are additive and why they are being levied decreases the likelihood that the customer will abandon.

    Knowledge drives confidence, and confidence drives conversion. If you'd like to understand best practices in providing contextual customer support in eCommerce to give your shoppers confidence, download the Oracle Cloud Service and Oracle Commerce - Contextual Support in Commerce White Paper. It discusses the process of adding customer support, including a suggested process for finding where knowledge has the most influence on your shoppers, and practical step-by-step illustrations of how contextual self-service can be added to your online commerce experience.

    Resources:
    [1] http://baymard.com/checkout-usability
    [2] http://baymard.com/blog/cart-abandonment

    Read the article

  • Making an interface for input in C - How?

    - by tloszk
    I have a big question. I've started to develop a simple 3D engine (or should I call it a framework?). It uses OpenGL for rendering, is developed for Windows, and is written entirely in C. But I don't know how to write an "interface" for keyboard/mouse input. I would like to keep it as simple and nice as possible - which the "native" Win32 input system is not. If anyone has suggestions on the topic, please tell me. Thanks to everyone who answers my question!
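
    One common shape for such a layer (a sketch only, with invented names; shown in TypeScript for consistency with the other sketches on this page, though in C it would be a struct plus an event ring buffer): the engine consumes a queue of normalized events, and only the platform backend knows about WM_KEYDOWN and friends.

    ```typescript
    // Platform-independent input layer: the backend pushes, the engine polls.
    enum Key { Left, Right, Up, Down, Space }

    interface InputEvent {
      kind: "keyDown" | "keyUp" | "mouseMove";
      key?: Key;          // set for key events
      x?: number;         // set for mouse events
      y?: number;
    }

    const queue: InputEvent[] = [];

    // Called only by the platform backend (Win32 message loop, GLFW, SDL...),
    // which translates native messages into these normalized events.
    function pushEvent(e: InputEvent): void { queue.push(e); }

    function pollEvent(): InputEvent | undefined { return queue.shift(); }

    // Engine side: drain the queue once per frame, no Win32 types in sight.
    function processInput(): void {
      for (let e = pollEvent(); e; e = pollEvent()) {
        if (e.kind === "keyDown" && e.key === Key.Space) {
          // jump, shoot, etc.
        }
      }
    }
    ```

    The design point is the boundary: the queue and event struct are the whole public interface, so the Win32-specific translation stays in one file and can later be swapped for another backend.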

    Read the article

  • Is there a viable alternative to the agile development methodology? [closed]

    - by Eric Wilson
    The two predominant software-development methodologies are waterfall and agile. When discussing these two, there is often much focus on the particular practices that distinguish them (pair programming, TDD, etc. vs. functional spec, big up-front design, etc.) But the real differences are far deeper, in that these practices come from a philosophy. Waterfall says: Change is costly, so it should be minimized. Agile says: Change is inevitable, so make change cheap. My question is, regardless of what you think of TDD or functional specs, is the waterfall development methodology really viable? Does anyone really think that minimizing change in software is a viable option for those that desire to deliver valuable software? Or is the question really about what sort of practices work best in our situations to manage the inevitable change?

    Read the article

  • Looking Back at PASS Summit 2013 - Location

    - by RickHeiges
    Now that it has been a few weeks since the Summit, I wanted to look back at the location "experiment". Convention Center - it seemed to work well for the conference. There were quite a few areas where you could sit down and get some work done or have a discussion. For the larger welcome reception the first night, I really liked the different areas. If you wanted to enjoy the Quiz Bowl, the ballroom area was set up nicely with big screens so that everyone could see and hear. The area right...(read more)

    Read the article

  • How safe is GParted when resizing Linux and Windows partitions?

    - by Olivier Pons
    I want to resize my partitions. I have 3 partitions: Ubuntu 10.04, Windows Seven, Ubuntu 11.10. The machine boots with the bootloader installed by Ubuntu 11.10. I want to expand (only expand) all 3 partitions. My HD is 1.8 TB, so it's big, and I have no way to back it up before expanding. So my question is: if you tell me GParted works 99.99% of the time, I'm willing to take the risk. If you tell me GParted works 90% of the time, I won't take that risk.

    Read the article

  • Are there any companies using BDD in a .NET environment?

    - by Nick
    I've seen BDD in action (in this case using SpecFlow and Selenium in a .NET environment) on a small test project. I was very impressed - mainly because the language used to specify the acceptance tests meant the product owner engaged with them much more easily. I'm now keen to bring this into my current organisation. However, I'm asked "who else uses this?" and "show me some case studies". Unfortunately, I cannot find any "big names" (or even "small names", for that matter!) of companies who are actively using BDD. I have two questions really: Is BDD adopted by companies out there? Who are they? And how can BDD be implemented in an agile .NET environment - are there any significant drawbacks to doing it?
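
    For readers unfamiliar with what the spec-to-code link looks like: the Gherkin feature text is tool-agnostic, and each tool binds it to step code. A hedged sketch using Cucumber.js in TypeScript (SpecFlow binds the same Gherkin to C# methods via [Given]/[When]/[Then] attributes; the scenario and step bodies here are invented):

    ```typescript
    // Feature file (plain Gherkin, readable by the product owner):
    //   Scenario: Returning customer sees their basket
    //     Given a customer with 2 items in their basket
    //     When they sign in
    //     Then the basket badge shows 2 items
    //
    // Matching step definitions: the {int} placeholders are captured and
    // passed as arguments. A real suite would drive the app or browser here.
    import { Given, When, Then } from "@cucumber/cucumber";
    import assert from "node:assert";

    let basketCount = 0;
    let badge = 0;

    Given("a customer with {int} items in their basket", function (n: number) {
      basketCount = n;          // stand-in for seeding the real application
    });

    When("they sign in", function () {
      badge = basketCount;      // stand-in for driving the UI (e.g. Selenium)
    });

    Then("the basket badge shows {int} items", function (n: number) {
      assert.strictEqual(badge, n);
    });
    ```

    The engagement benefit mentioned above comes entirely from the commented Gherkin text: the product owner reads and even writes scenarios, while the bindings stay in the codebase.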

    Read the article

  • Kill a tree, save your website? Content strategy in action, part III

    - by Roger Hart
    A lot has been written about how driving content strategy from within an organisation is hard. And that's true. Red Gate is pretty receptive to new ideas, so although I've not had a total walk in the park, it's been a hike with charming scenery. But I'm one of the lucky ones. Lots of people are involved in content, and depending on your organisation some of those people might be the kind who'll gleefully call themselves "stakeholders". People holding a stake generally want to stick it through something's heart and bury it at a crossroads. Winning them over is not always easy. (Richard Ingram has made a nice visual summary of how this can feel: Content strategy Snakes & Ladders - pdf.)

    So yes, a lot of content strategy advocates are having a hard time. And sure, we've got a nice opportunity to get together and have a hug and a cry, but in the interim we could use a hand. What to do? My preferred approach is, I'll confess, brutal. I'd like nothing so much as to take a scorched-earth approach to our website. Burn it, salt the ground, and build the new one right: focused on clearly delineated business and user content goals, and instrumented so we can tell if we're doing it right. I'm never getting buy-in for that, but a boy can dream. So how about just getting buy-in for some small, tenable improvements? Easier, but still non-trivial.

    I sat down for a chat with our marketing and design guys. It seemed like a good place to start, even if they weren't up for my "Ctrl-A + Delete" solution. We talked through some of this stuff, and we pretty much agreed that our content is a bit more broken than we'd ideally like. But to get everybody on board, the problems needed visibility.

    Doing a visual content inventory

    Print out the internet. Make a Wall Of Content. Seriously. If you've already done a content inventory, you know your architecture, and you know the scale of the problem. But it's quite likely that very few other people do. So make it big and visual. I'm going to carbon hell, but it seems to be working. This morning, I printed out a tiny, tiny part of our website: the non-support content pertaining to SQL Compare. I made big, visual, A3 blowups of each page, and covered a wall with them. A page per web page, spread over something like 6m x 2m, with metrics, right in front of people. Even if nobody reads it (and they are doing), the sheer scale is shocking. 53 pages, all told. Some are redundant, some outdated, some trivial, a few fantastic, and frighteningly many are great ideas delivered not-quite-right. You have to stand quite far away to get it all in your field of vision.

    For a lot of today, a whole bunch of folks have been gawping in amazement, talking each other through it, peering at the details, and generally getting excited about content. Developers, sales guys, our CEO, the marketing folks - they're engaged. Will it last? I make no promises. But this sort of wave of interest is vital to getting a content strategy project kicked off. While the content strategist is a saucer-eyed orphan in the cupboard under the stairs, they're not getting a whole lot done. Of course, just printing the site won't necessarily cut it. You have to know your content, and be able to talk about it. Ideally, you'll also have page view and time-on-page metrics. One of the most powerful things you can do is, when people are staring at your wall of content, ask them what they think half of it is for. Pretty soon, you've made a case for content strategy.

    We're also going to get folks to mark it up - cover it with notes and post-its, let us know how they feel about our content. I'll be blogging about how that goes, but it's exciting. Different business functions have different needs from content, so the more exposure the content gets, and the more feedback, the more you know about those needs. Fingers crossed for awesome.

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed.

    Scaling is not free

    For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.

    Choosing the denominator radically changes the picture

    When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator, and Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is arbitrary (not necessarily comparable across vendors), fluid (modern chips shift chip resources in response to load), and invisible (unless you have a microscope, you can't see it). By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get:

    13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM, vs.
    57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle

    The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; I can accurately compare the number of chips in my system to the count in some other vendor's system; and the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".

    SPEC Fair Use requirements

    Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule, www.spec.org/fairuse.html, section I.D.2. For example, Mr. Kharkovski's footnote (1) begins: "Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)". The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. Despite the implication of the footnote, you will not find any mention of 449, nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."

    Substantiation and transparency

    Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.

    T5 claim for "Fastest Processor"

    Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+: for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for peak tuning and for base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC and 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page.

    SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.
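
    To make the denominator effect concrete, here is a tiny sketch using only the figures quoted above (the per-core numbers from the footnote, the chip counts from the per-chip arithmetic); note how the ranking flips with the denominator:

    ```typescript
    // Two published SPECjEnterprise2010 results, two choices of denominator.
    const results = [
      { name: "Oracle SPARC T5", ejops: 57422.17, chips: 16, ejopsPerCore: 449 },
      { name: "IBM POWER7+",     ejops: 13161.07, chips: 4,  ejopsPerCore: 823 },
    ];

    for (const r of results) {
      const perChip = r.ejops / r.chips;
      console.log(`${r.name}: ${r.ejopsPerCore} EjOPS/core, ` +
                  `${perChip.toFixed(0)} EjOPS/chip`);
    }
    // Oracle SPARC T5: 449 EjOPS/core, 3589 EjOPS/chip  <- ahead per chip
    // IBM POWER7+:     823 EjOPS/core, 3290 EjOPS/chip  <- ahead per core
    // (3589 vs. the 3588 quoted above is only a rounding difference.)
    ```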

    Read the article

  • Design Code Outside of an IDE (C#)?

    - by ryanzec
    Does anyone design code outside of an IDE? I think code design is great and all, but the only place I find myself actually designing code (besides in my head) is in the IDE itself. I generally think about it a little beforehand, but when I go to type it out, it is always in the IDE; no UML or anything like that. Now, I think having UML of your code is really good, because you can see a lot more of the code on one screen; however, the issue I have is that once I've drawn it in UML, I then have to type the actual code, and that is just duplicated work for me. For those who work with C# and design code outside of Visual Studio (or at least outside Visual Studio's text editor), what tools do you use? Do those tools allow you to convert your design to actual skeleton code? And is it also possible to convert code back to the design (when you update the code and need an updated UML diagram)?

    Read the article

  • Should I reorder partitions for performance?

    - by Marcel
    I have, in principle, 3 big partitions on my netbook: one for Windows, one for shared files, one for Ubuntu. I recently found out (using hdparm) that the hard disk seems to have much better performance on the first 2/3 (~60 MB/s) than on the last 1/3 (~40 MB/s). I am thinking of deleting the second partition and creating new partitions for swap and / directly after Windows. Is this effort worthwhile? I also want to upgrade to 10.04/10.10 but keep the option to go back to the old system, so maybe I should install Ubuntu completely in this new partition?

    Read the article

  • Installed Ubuntu in VMware Player, Black side bars / broken GUI

    - by Eric
    I recently installed Ubuntu 12.10 in VMware Player and have ended up with a black sidebar, missing icons, and a pretty broken GUI. Everything works just fine, though. I am able to run Firefox, open a terminal, and all that good stuff; it's just that I can't SEE them on the sidebar. I have to open up a separate window on Windows with a picture of the Ubuntu 12.10 desktop in order to know what to click on, but once I do click on it, it's pretty much smooth sailing from there (not counting closing Firefox and several other things). Again, everything works just fine, but the sidebar, the GUI, and the dash (I get a completely black screen when I open the dash) come up completely black and broken (visual tears and whatnot), and hovering over them just brings up a big black bar (I assume it's the "zooming in" of the icon, but it just shows a black bar where the icon should be). I'm not exactly sure what to do to fix the GUI. Any ideas as to what I may do to fix this?

    Read the article

  • Genetic algorithm for leveling/build testing

    - by Renan Malke Stigliani
    I'm starting to build an online PVP game (duel-like, one-to-one) where there is leveling, skill points, special attacks, and all the common stuff. Since I've never done anything like this, I'm still thinking about the math behind the level/skill/special balance. I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It would go like this: generate a big population of random characters; make them fight, leveling them up according to their victories (more XP) and losses (less XP); mate the winners, crossing their builds, to try to produce even better characters; add some more random characters, emulating new players; and repeat the process for some time, or until I find some characters who can beat everyone's butts. Then I could play with the math and try to find the balance where the top x% of characters is a mix of various build types. So, is this a good idea, or is there some easier method of doing the balancing? PS: I also like this idea because it sounds fun.
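
    A toy sketch of that loop (TypeScript; the stat model, population sizes, and "combat" are all invented stand-ins for the real game rules):

    ```typescript
    // Evolve character builds: fight -> select winners -> crossover -> add randoms.
    interface Build { str: number; agi: number; vit: number; fitness: number; }

    const rand = (n: number) => Math.floor(Math.random() * n);

    const randomBuild = (): Build =>
      ({ str: rand(10), agi: rand(10), vit: rand(10), fitness: 0 });

    // Stand-in combat model: stat-weighted dice; the winner gains fitness ("XP").
    function fight(a: Build, b: Build): void {
      const roll = (c: Build) =>
        c.str * Math.random() + c.agi * Math.random() + c.vit * Math.random();
      (roll(a) > roll(b) ? a : b).fitness++;
    }

    // Child takes each stat from a random parent, with a small mutation.
    const crossover = (a: Build, b: Build): Build => ({
      str: (Math.random() < 0.5 ? a.str : b.str) + rand(3) - 1,
      agi: (Math.random() < 0.5 ? a.agi : b.agi) + rand(3) - 1,
      vit: (Math.random() < 0.5 ? a.vit : b.vit) + rand(3) - 1,
      fitness: 0,
    });

    let pop: Build[] = Array.from({ length: 100 }, randomBuild);
    for (let gen = 0; gen < 50; gen++) {
      for (let duel = 0; duel < 500; duel++) {
        const i = rand(pop.length);
        const j = (i + 1 + rand(pop.length - 1)) % pop.length; // distinct opponent
        fight(pop[i], pop[j]);
      }
      pop.sort((a, b) => b.fitness - a.fitness);
      const winners = pop.slice(0, 30);
      pop = [
        ...winners.map(w => ({ ...w, fitness: 0 })),        // elites survive
        ...Array.from({ length: 50 }, () =>
          crossover(winners[rand(30)], winners[rand(30)])), // children
        ...Array.from({ length: 20 }, randomBuild),         // "new players"
      ];
    }
    console.log("top surviving builds:", pop.slice(0, 5));
    ```

    If the surviving top builds converge on a single stat line, that stat is probably overpowered; a healthy balance should leave a mix of build types at the top, exactly as described above.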

    Read the article

  • Using model tools as map editor

    - by cooky451
    I want to make a game which would require a 3D map editor. Of course, I would like to avoid creating such an editor, so my idea is to use modeling tools (3ds Max, Maya, Blender) to create the map, and to give game-specific objects special names. That way, I'd just need to write a COLLADA-to-native-map-format converter. But I'm not sure this is possible the way I imagine it, which is why I'd like to hear your thoughts on the matter: Are modeling tools suitable for creating big open-world maps? Can this "naming convention" idea for game-specific objects work? Are the modeling tools able to export a scene in chunks, or in a way that occlusion culling and collision detection can be done properly? If not, is there a way to build a suitable data structure from the exported data?
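
    The naming-convention part is commonly done roughly like this (a sketch with invented prefixes and types; the COLLADA XML parsing itself is omitted): walk the exported scene nodes and route each one to a game-side object by its name prefix.

    ```typescript
    // Classify exported scene nodes by naming convention.
    interface SceneNode { name: string; transform: number[]; meshId?: string; }

    type GameObject =
      | { kind: "spawnPoint"; transform: number[] }
      | { kind: "collider"; meshId: string }
      | { kind: "staticMesh"; meshId: string };

    function classify(node: SceneNode): GameObject | null {
      if (node.name.startsWith("spawn_"))
        return { kind: "spawnPoint", transform: node.transform };
      if (node.name.startsWith("col_") && node.meshId)
        return { kind: "collider", meshId: node.meshId };   // invisible collision mesh
      if (node.meshId)
        return { kind: "staticMesh", meshId: node.meshId }; // plain scenery
      return null; // lights, helpers, etc. handled elsewhere
    }
    ```

    The converter then writes the classified objects into the native map format, so artists never touch anything but node names in the modeling tool.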

    Read the article

  • Is it possible to use RubyGnome2's/QtRuby's HTML renderers to make a UI for a Ruby script?

    - by mjr
    I'd like to make a graphical user interface for my script instead of running it from the console. I'm aware there's a wealth of UI libraries for Ruby, but I'm quite familiar with HTML and CSS and I'd like to use them for building the interface. So the question is: is it possible to use an HTML rendering library to make such a UI? From what I understand, it's relatively easy to put in an HTML-rendered view of something, but is it possible to communicate back with the script? Like when I push that big red button, does it actually tell the script to act on it? Obviously it's possible if the script runs on the server side, but I'd like to run it as a desktop application.

    Read the article

  • SQLAuthority News – 7th Anniversary of Blog – A Personal Note

    - by Pinal Dave
    Special Day

    Today is a very special day - seven years ago I blogged for the very first time. Seven years ago, I didn't know what I was doing; I didn't know how to blog, or even what a blog was or what to write. I was working as a DBA, and I was trying to solve a problem - at my job, there were a few issues I had to fix again and again and again. There were days when I was rewriting the same solution over and over, and there were times when I would get very frustrated because I could not write the same elegant solution that I had written before. I came up with a solution to this problem - posting these solutions online, where I could access them whenever I needed them. At that point, I had no idea what a blog was, or even how the internet worked; I had no idea that a blog would be visible to others. Can you believe it?

    Google it on Yahoo!

    After a few posts on this "blog," there was a surprise for me - an e-mail saying that someone had left me a comment. I was surprised, because I didn't even know you could comment on a blog! I logged on and read my comment. It said: "I like your script, but there is a small bug. If you could fix it, it will run on multiple other versions of SQL Server." I was like, "Wow, someone figured out how to find my blog, and they figured out how to fix my script!" I found the bug, I fixed the script, and I wrote a thank-you note to the guy. My first question for him was: how did you figure it out - not the script, but how to find my blog? He said he found it through Yahoo Search (this was in the time before Google, believe it or not). From that day, my life changed. I wrote a few more posts, I got a few more comments, and I started to watch my traffic. People were reading, commenting, and giving feedback. At the end of the day, people enjoyed what I was writing. This was a fantastic feeling! I never thought I would be writing for others. Even today, I don't feel like I am writing for others, but that I am simply posting what I am learning every day. From that very first day, I decided that I would not change my intent or my blog's purpose.

    72 Million Views - 2600 Posts - 57000 Comments - 10 Books - 9 Courses

    Today, this blog is my habit, my addiction, my baby. Every day I try to learn something new, and that lesson gets posted on the blog. Lately there have been days where I am traveling for a full 24 hours, but even on those days I try to learn something new, and later, when I have free time, I will still post it to the blog. Because of this habit, this blog has over 72 million views, I have written more than 2600 posts, and there are 57,000 comments and counting. I have also written 10 books, created 9 courses, and learned so many things. This blog has given me back so much more than I ever put into it. It gave me an education, a reason to learn something new every day, and a way to connect to people. I like to think of it as a learning chain, a relay where we all pass knowledge from one to another.

    Never Ending Journey

    When I started the blog, I thought I would write for a few days and stop, but now, after seven years, I haven't stopped and I have no intention of stopping! However, change happens, and for this blog it starts today. This blog started as a single resource for SQL Server, but it has since grown beyond that, to SharePoint, personal development, developer training, MySQL, Big Data, and lots of other things. Truly speaking, this blog is more than just SQL Server, and that was always my intention. I named it "SQL Authority," not "SQL Server Authority"! Loudly and clearly, I would like to announce that I am going to go back to my roots and start writing more about SQL, more about big data, and more about other technologies like relational databases, MySQL, Oracle, and others. My goal is not to become a comprehensive resource for every technology; my goal is to learn something new every day - and now it can be so much more than just SQL Server. I will learn it, and post it here for you.

    I have written a very long post on this anniversary, but here is the summary: Thank you. You all have been wonderful. Seven years is a long journey, and it makes me emotional. I have been "with" this blog since before I met my wife, before we had our daughter. This blog is like a fourth member of the family. Keep reading, keep commenting, keep supporting. Thank you all.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Oracle Dojos: Oracle Knowledge - Compact and Free

    - by Anne Manke
    The little red Oracle Dojos give a short, compact overview of Oracle technologies, with each Dojo picking up one specific topic area. Whether it's Enterprise Manager 12c and Cloud Control or an introduction to Big Data, the Oracle Dojos offer information on theory and practice, tips and hints - all in a handy format. The Dojos are available either online as PDFs or as small booklets (see photo). You can get the booklets at our Oracle events or directly from us. Get your own Dojo and call us today! By the way: "Dojo" is Japanese and means a practice room or practice hall. And that is exactly how the Dojos are meant to be seen: as exercise books for expanding and improving your own knowledge and skills.

    Read the article

  • Links from UK TechDays 2010 sessions on Entity Framework, Parallel Programming and Azure

    - by Eric Nelson
    [I will do some longer posts around my sessions when I get back from holiday next week]

    Big thanks to all those who attended my 3 sessions at TechDays this week (April 13th and 14th, 2010). I really enjoyed both days and watched some great sessions - my personal fave being the Silverlight/Expression session by my friend and colleague Mike Taulty. The following links should help get you up and running on each of the technologies.

    Entity Framework 4:
    Entity Framework 4 Resources http://bit.ly/ef4resources
    Entity Framework Team Blog http://blogs.msdn.com/adonet
    Entity Framework Design Blog http://blogs.msdn.com/efdesign/

    Parallel Programming:
    Parallel Computing Developer Center http://msdn.com/concurrency
    Code samples http://code.msdn.microsoft.com/ParExtSamples
    Managed Team Blog http://blogs.msdn.com/pfxteam
    Tools Team Blog http://blogs.msdn.com/visualizeparallel
    My code samples http://gist.github.com/364522
    And PDC 2009 session recordings to watch:

    Windows Azure Platform:
    UK Site http://bit.ly/landazure
    UK Community http://bit.ly/ukazure (http://ukazure.ning.com)
    Feedback www.mygreatwindowsazureidea.com
    Azure Diagnostics Manager - a client for Windows Azure Diagnostics
    Cloud Storage Studio - a client for Windows Azure Storage
    SQL Azure Migration Wizard http://sqlazuremw.codeplex.com

    Read the article
