Search Results

Search found 59959 results on 2399 pages for 'time complexity'.


  • Valid concerns over shared developer team

    - by alphadogg
    Say your company is committed (and doesn't want to consider not doing this) to sharing/pooling a development team across a handful of business units. What would you set up as concerns/expectations that must be cleared before doing this? For example: agreement between units on how much actual time (FTE) is allocated to each unit; agreement on scheduling of staff; agreement on the request procedure if extra time is required by one party; etc. Have you been in a situation like this as the manager of one unit destined to use this? If so, what problems did you experience? What would you have implemented, or what did you implement? Same if you were the manager of the shared team. Please assume, for discussion, that the people concerned know that you can swap devs in and out on a whim. I don't want to know the disadvantages of this approach; I know them. I want to anticipate issues and know how to mitigate the fallout.

    Read the article

  • Access-based Enumeration (December 04, 2009)

    - by user12612012
    Access-based Enumeration (ABE) is another recent addition to the Solaris CIFS Service - delivered into snv_124.  Designed to be compatible with Windows ABE, which was introduced in Windows Server 2003 SP1, this feature filters directory content based on the user browsing the directory.  Each user can only see the files and directories to which they have access.  This can be useful to implement an out-of-sight, out-of-mind policy, or simply to reduce the number of files presented to each user - to make it easier to find files in directories containing a large number of files. ABE is managed on a per-share basis by a new boolean share property called, as you might imagine, abe, which is described in sharemgr(1M).  When set to true, ABE filtering is enabled on the share, and directory entries to which the user has no access will be omitted from directory listings returned to the client.  When set to false or not defined, ABE filtering will not be performed on the share.  The abe property is not defined by default. Administration is straightforward, for example:

        # zfs set sharesmb=abe=true,name=jane tank/home/jane
        # sharemgr show -vp
            zfs
                zfs/tank/home/jane nfs=() smb=()
                      jane=/export/home/jane     smb=(abe="true")

    ABE is also supported via sharemgr(1M) and on smbautohome(4) shares. Note that even though a file is visible in a share with ABE enabled, it doesn't automatically mean that the user will always be able to open the file.  If a user has read attribute access to a file, ABE will show it, but access will be denied if the user tries to open the file for reading or writing. We considered supporting ABE on NFS shares, as suggested by the name of PSARC/2009/375, but we ran into problems due to NFS client readdir caching.  NFS clients maintain a common directory entry cache for all users, which not only defeats the intent of ABE but can lead to very confusing results.  If multiple users are looking at the content of a directory with ABE enabled, the entries that get cached will depend on who looks at the directory first.  Subsequent users may see files that ABE on the server would have filtered out, or files may be missing because they were filtered out for the original user. Although this issue can be resolved by disabling the NFS client readdir cache, this was deemed to be an unsuitable solution because it would create a dependency between a server share property and the configuration on all NFS clients, and there was the potential for differences in behavior across the various NFS clients.  It just seemed to add unnecessary administration complexity, so we pulled it out.

    References for more information:
    PSARC/2009/246 ZFS support for Access Based Enumeration
    PSARC/2009/375 ABE share property for NFS and SMB
    6802734 Support for Access Based Enumeration
    6802736 SMB share support for Access Based Enumeration
    Windows Access-based Enumeration

    Read the article

  • Rosegarden doesn't work

    - by Brallan Aguilar
    I recently installed Rosegarden. When I ran it for the first time, it worked excellently. Then I had to restart my computer, and when I ran it for the second time, something strange happened: according to the System Monitor, Rosegarden was running, but I couldn't see the application; sometimes it opened on another desktop. Unity also has some instability problems, like the title bar of other programs disappearing, or a zone being marked at the left side of the screen and never being hidden (for example, when we drag a window to the right side, we see an "orange" area and the window is resized to occupy half of the desktop).

    Read the article

  • What terminal emulators are available for heavy terminal users?

    - by Noah Goodrich
    I spend a lot of time at the command line during the workday, and at home too, since I run Ubuntu exclusively. I've been using the default GNOME terminal, but I've reached a point where I'd really like to get my terminal tricked out so that my common tasks are as easy as possible. Specifically, I find that I spend a lot of time browsing code in the terminal and working in config files. On my wish list would be: the ability to have multiple screens, tabs, or windows (I don't have a preference at this point) that I can easily switch between; color coding for everything; and an easy way to modify the aesthetics of the terminal (is it vain to want my terminal to look nice?), such as transparency, borders, etc.

    Read the article

  • Best practices for managing deployment of code from dev to production servers?

    - by crosenblum
    I am hoping to find an easy tool or method that allows managing our code deployment. Here are the features I hope this solution has: either web-based or a batch file that, given a list of files, will communicate with our production server to back up those files into separate folders, zip them, and put them in a backup code folder; then it records the name, date/time, and purpose of the deployment; then it sends the files to their proper spot on the production server. I don't want too complex an interface for doing the deployments, because then they might never use it. Or is what I am asking for too unrealistic? I just know that my self-discipline isn't perfect, and I'd rather have a tool I can rely on to do what needs to be done than my own memory of what exact steps I have to take every time. How do you make sure everything gets deployed correctly, with an easy rollback in case of any mistakes?
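
    A minimal sketch (not from the post) of the kind of script being described: back up the current production copies, zip them with a timestamp, log the purpose, then copy the staged files into place. All paths and the example file list are hypothetical placeholders, and a real setup would add error handling and a remote transfer (e.g. SSH/SFTP) instead of local copies.

      import datetime
      import shutil
      import zipfile
      from pathlib import Path

      # Hypothetical locations -- adjust for the real environment.
      PROD_ROOT = Path("/var/www/production")
      STAGING_ROOT = Path("/home/deploy/staging")
      BACKUP_ROOT = Path("/var/backups/code")
      LOG_FILE = BACKUP_ROOT / "deployments.log"

      def deploy(files, purpose):
          """Back up, archive, and log the files, then copy them into production."""
          stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
          backup_dir = BACKUP_ROOT / stamp
          backup_dir.mkdir(parents=True, exist_ok=True)

          # 1. Copy the current production versions into a dated backup folder.
          for rel in files:
              src = PROD_ROOT / rel
              if src.exists():
                  dest = backup_dir / rel
                  dest.parent.mkdir(parents=True, exist_ok=True)
                  shutil.copy2(src, dest)

          # 2. Zip the backup so a rollback is a single unzip.
          with zipfile.ZipFile(BACKUP_ROOT / f"{stamp}.zip", "w") as zf:
              for path in backup_dir.rglob("*"):
                  if path.is_file():
                      zf.write(path, path.relative_to(backup_dir))

          # 3. Record the name, date/time, and purpose of the deployment.
          with open(LOG_FILE, "a") as log:
              log.write(f"{stamp}\t{purpose}\t{','.join(files)}\n")

          # 4. Send the staged files to their proper spot on the production tree.
          for rel in files:
              target = PROD_ROOT / rel
              target.parent.mkdir(parents=True, exist_ok=True)
              shutil.copy2(STAGING_ROOT / rel, target)

      deploy(["index.php", "lib/db.php"], "Fix login redirect bug")

    Rollback is then just unzipping the matching timestamped archive back over the production root.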

    Read the article

  • Patterns for Handling Changing Property Sets in C++

    - by Bhargav Bhat
    I have a bunch "Property Sets" (which are simple structs containing POD members). I'd like to modify these property sets (eg: add a new member) at run time so that the definition of the property sets can be externalized and the code itself can be re-used with multiple versions/types of property sets with minimal/no changes. For example, a property set could look like this: struct PropSetA { bool activeFlag; int processingCount; /* snip few other such fields*/ }; But instead of setting its definition in stone at compile time, I'd like to create it dynamically at run time. Something like: class PropSet propSetA; propSetA("activeFlag",true); //overloading the function call operator propSetA("processingCount",0); And the code dependent on the property sets (possibly in some other library) will use the data like so: bool actvFlag = propSet["activeFlag"]; if(actvFlag == true) { //Do Stuff } The current implementation behind all of this is as follows: class PropValue { public: // Variant like class for holding multiple data-types // overloaded Conversion operator. Eg: operator bool() { return (baseType == BOOLEAN) ? this->ToBoolean() : false; } // And a method to create PropValues various base datatypes static FromBool(bool baseValue); }; class PropSet { public: // overloaded[] operator for adding properties void operator()(std::string propName, bool propVal) { propMap.insert(std::make_pair(propName, PropVal::FromBool(propVal))); } protected: // the property map std::map<std::string, PropValue> propMap; }; This problem at hand is similar to this question on SO and the current approach (described above) is based on this answer. But as noted over at SO this is more of a hack than a proper solution. The fundamental issues that I have with this approach are as follows: Extending this for supporting new types will require significant code change. At the bare minimum overloaded operators need to be extended to support the new type. Supporting complex properties (eg: struct containing struct) is tricky. Supporting a reference mechanism (needed for an optimization of not duplicating identical property sets) is tricky. This also applies to supporting pointers and multi-dimensional arrays in general. Are there any known patterns for dealing with this scenario? Essentially, I'm looking for the equivalent of the visitor pattern, but for extending class properties rather than methods. Edit: Modified problem statement for clarity and added some more code from current implementation.

    Read the article

  • Intel NUC Video Blur

    - by donopj2
    I recently purchased the D34010WYKH NUC and figured this would be a great time to make the jump to a Linux-based system. I'm running Ubuntu 14.04, and I'm having an issue with video rendering that is driving me mad. Essentially, videos (all 1080p mkv files) appear to blur slowly, and it's most noticeable when the camera remains on a scene for a long period of time. Then all of a sudden the video will correct the blur and the image will be sharp, only for the blurring to begin again, followed by more sudden and noticeable corrections. I have seen the exact same issue in both VLC and XBMC, and across several different videos. I have installed the latest Intel graphics drivers and searched the web, but to be honest I'm not sure how to accurately describe this problem. I'm also quite new to the OS, so my experience tinkering is limited. Has anyone experienced this type of issue before? Can it be resolved?

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first. So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit. There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities. I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know. Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • Live examples of the Windows Azure Platform running Java and Ruby on Rails

    - by Eric Nelson
    At QCon in March we had a booth focused on interoperability, out of which came the idea to create an application implemented in both Java and Ruby on Rails, running on top of the Windows Azure Platform. Nothing fancy, just an application to capture attendees' views on Microsoft and interoperability. It was implemented by Active Web Solutions, long-time fans of Azure. Worth a quick squint :-) Check out the related links below for info to get you up and running. Check out: the Java version http://ukinterop.cloudapp.net/  The Ruby on Rails version http://rubyukinterop.cloudapp.net/ (we ran out of time to finish this one) Related links: Tomcat Solution Accelerator http://code.msdn.microsoft.com/winazuretomcat AzureRunMe http://azurerunme.codeplex.com/ UK Azure Online Community – join today. UK Windows Azure Site Start working with Windows Azure

    Read the article

  • Java SE 7 Developer Preview Release - Download Now!

    - by ruma.sanyal
    The JDK7 Developer Preview Release is now available for rigorous community testing. But time is running out! The latest build is feature-complete, stable and ready to roll - so download, test and report bugs now. Let us know what you think. If you report a bug in the JDK 7 developer preview before April 4th, the Java product team will sing your praises on the Java SE 7 Honor Roll. PLUS... we will send you some Java swag. We'll read, evaluate, and act on all feedback received via the usual bug-reporting channel. Bugs reported later on might not get fixed in time for the initial release, so if you want to be a contributor to Java SE 7, do it before the April deadline.

    Read the article

  • How to make an arc'd, but not Mario-like, jump in Python/Pygame [duplicate]

    - by PythonInProgress
    This question already has an answer here: Arc'd jumping method? (2 answers); Analysis of Mario game physics [closed] (6 answers). I have looked at many, many questions similar to this, and cannot find a simple answer that includes the needed code. What I am trying to do is raise the y value of a square for a certain amount of time, then raise it a bit more, then a bit more, then lower it twice. I can't figure out how to use acceleration/friction, and might want to do that too. P.S. - can someone tell me if I should post this on Stack Overflow or not? Thanks all! Edit: What I am looking for is not Mario-like physics, but a simple equation that can be used to increase and then decrease height over the course of a few seconds.
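
    A minimal sketch of the kind of equation being asked for, using pygame (the window size, speed, and gravity values are arbitrary assumptions): give the square an upward velocity when the jump starts, then add a constant gravity value to that velocity every frame, so the height rises, slows, and falls along an arc.

      import pygame

      pygame.init()
      screen = pygame.display.set_mode((400, 300))
      clock = pygame.time.Clock()

      x, ground_y = 180, 250          # square position; ground_y is the resting height
      y = float(ground_y)
      jump_speed = -9.0               # initial upward velocity (negative = up in pygame)
      gravity = 0.4                   # added to the velocity every frame
      velocity = 0.0
      jumping = False

      running = True
      while running:
          for event in pygame.event.get():
              if event.type == pygame.QUIT:
                  running = False
              elif event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE and not jumping:
                  velocity = jump_speed
                  jumping = True

          if jumping:
              y += velocity           # move by the current velocity...
              velocity += gravity     # ...then slow the rise / speed up the fall
              if y >= ground_y:       # landed: snap back to the ground
                  y = float(ground_y)
                  jumping = False

          screen.fill((0, 0, 0))
          pygame.draw.rect(screen, (200, 200, 50), (x, int(y), 40, 40))
          pygame.display.flip()
          clock.tick(60)

      pygame.quit()

    Pressing space starts a jump; tweaking jump_speed and gravity changes the height and duration of the arc, and adding a horizontal velocity turns the same curve into a forward jump.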

    Read the article

  • Networking Client Server Packet logic (How they communicate)

    - by Trixmix
    I want to know the logic behind server-client communication through packets for a real-time game. For example, the server sends x packets, then the client receives x packets and processes them. Basically, what is the process to keep the client and server in sync and able to receive and send packets? A more in-depth example of what I want to know - client: step 1, wait for a packet; step 2, read x packets; step 3, process x packets; step 4, send x packets; and so on... I need to know the very basic outline of the communication. The big questions are: 1) Do I send and read packets all at one time, i.e. for-loop through the incoming packets array list and read them all, or one per server loop, or what? 2) What order should I do things in, i.e. first receive, then read, then process, then send, etc.? 3) As asked above, a step-by-step outline of what the server/client should do. Thanks!
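
    Not from the thread - a minimal sketch of one common server-side pattern, assuming UDP and a fixed tick rate (the port number, packet format, and apply_packet logic are placeholders): each tick, drain every packet waiting on the socket, run one simulation step, then send one update per known client.

      import socket
      import time

      TICK_RATE = 30                      # simulation ticks per second
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.bind(("0.0.0.0", 9999))
      sock.setblocking(False)             # recvfrom returns immediately if nothing is queued

      clients = set()
      game_state = {}                     # placeholder for whatever the game tracks

      def apply_packet(data, addr):
          """Placeholder: update game_state from one client packet."""
          clients.add(addr)
          game_state[addr] = data

      while True:
          tick_start = time.time()

          # 1. Read *all* packets that arrived since the last tick.
          while True:
              try:
                  data, addr = sock.recvfrom(1024)
              except BlockingIOError:
                  break                   # queue drained, stop reading this tick
              apply_packet(data, addr)

          # 2. Process one step of the simulation with those inputs applied
          #    (physics, AI, collision ... omitted here).

          # 3. Send the resulting state to every client, once per tick.
          snapshot = repr(game_state).encode()
          for addr in clients:
              sock.sendto(snapshot, addr)

          # 4. Sleep the remainder of the tick to keep a fixed rate.
          elapsed = time.time() - tick_start
          time.sleep(max(0.0, 1.0 / TICK_RATE - elapsed))

    The client side mirrors this: each frame it drains incoming snapshots, applies the newest one, then sends its own input packet.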

    Read the article

  • White Paper on Parallel Processing

    - by Andrew Kelly
      I just ran across what I think is a newly published white paper on parallel processing in SQL Server 2008 R2. The date is October 2010, but this is the first time I have seen it, so I am not 100% sure how new it really is. Maybe you have seen it already, but if not I recommend taking a look. So far I haven’t had time to read it extensively, but from a cursory look it appears to be quite informative, and this is one of the areas least understood by a lot of DBAs. It was authored by Don Pinto...(read more)

    Read the article

  • Solving Inbound Refinery PDF Conversion Issues, Part 1

    - by Kevin Smith
    Working with Inbound Refinery (IBR) and PDF Conversion can be very frustrating. When everything is working smoothly you kind of forget it is even there. Documents are checked into WebCenter Content (WCC), sent to IBR for conversion, converted to PDF, returned to WCC, and voila - your Office documents have a nice PDF rendition available for viewing. Then a user checks in a bunch of password-protected Word files, the conversions fail, your IBR queue starts backing up, users start calling asking why their documents have not been released yet, and you spend a frustrating afternoon trying to recover and get things running properly again. Password-protected documents are one cause of PDF conversion failures, and I will cover those in a future blog post, but there are many other problems that can cause conversions to fail, especially when working with the WinNativeConverter and using the native applications, e.g. Word, to convert a document to PDF. There are other conversion options like PDFExportConverter, which uses Oracle OutsideIn to convert documents directly to PDF without the need for the native applications. However, to get the best fidelity to the original document the native applications must be used. Many customers have tried PDFExportConverter, but have stayed with the native applications for conversion since the conversion results from PDFExportConverter were not as good as when the native applications are used. One problem I ran into recently, which at least has an easy solution, is Word documents that display a Show Repairs dialog when the document is opened. If you open the problem document yourself you will see this dialog. This will cause the conversion to time out; any time the native application displays a dialog that requires user input, the conversion will time out. The solution is to add a BulletProofOnCorruption setting to the registry for the user running Word on the IBR server. See this support note from Microsoft for details. The support note says to set the registry key under HKEY_CURRENT_USER, but since we are running IBR as a service the correct location is under HKEY_USERS\.DEFAULT. Also, since in our environment we were using Office 2007, the correct registry key to use was: HKEY_USERS\.DEFAULT\Software\Microsoft\Office\11.0\Word\Options. Once you have done this, restart the IBR managed server and resubmit your problem document. It should now be converted successfully. For more details on IBR see the Oracle® WebCenter Content Administrator's Guide for Conversion.
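
    As a hedged illustration only (the key path is the one quoted in the post, and the DWORD value of 1 is an assumption based on the Microsoft note it references), the same change could be scripted with Python's standard winreg module rather than edited by hand; it needs to be run elevated on the IBR server:

      import winreg

      KEY_PATH = r".DEFAULT\Software\Microsoft\Office\11.0\Word\Options"

      # Create (or open) the key under HKEY_USERS and set BulletProofOnCorruption = 1,
      # which per the post stops Word waiting on the "Show Repairs" dialog during conversion.
      with winreg.CreateKeyEx(winreg.HKEY_USERS, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
          winreg.SetValueEx(key, "BulletProofOnCorruption", 0, winreg.REG_DWORD, 1)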

    Read the article

  • Dynamic character animation - Using the physics engine or not

    - by Lex Webb
    I'm planning on building a dynamic, reactive animation engine for the characters in my 2D game. I have already built templates for a skeleton-based animation system, using key frames and interpolation to specify a limb's position at any given moment in time. I am using Farseer Physics (an extension of Box2D) in MonoGame/XNA in C#. My real question lies in how I go about tying this character animation into the physics engine. I have two options: moving limbs using the physics engine - applying an interpolated force to each limb (dynamic body) in order to attempt to get it to the position dictated by the skeleton animation; or moving limbs by simply changing the position of a fixed body - updating the new position of each limb manually, attempting to take physics collisions into account, then stepping the physics after the animation to allow for environment interaction. Each of these methods has distinct advantages and disadvantages. Physics-based movement - advantages: possibly more natural/realistic movement; better interaction with game objects, as the force applied to objects colliding with characters would be calculated for me; no need to convert to dynamic bodies when reacting to projectiles/death/fighting. Disadvantages: possible difficulty in calculating the correct amount of force to move a limb a certain distance at a constant rate; an underlying character balance system would need to be created, robust enough to prevent characters falling over at the touch of a feather; added code complexity and processing time for the above. Static object movement - advantages: easy to interpolate movement of limbs between game steps; moving limbs is as simple as applying a rotation to the skeleton bone; greater control over limbs - I won't need to worry about characters falling over, as all animation would be pre-defined. Disadvantages: possibly unnatural movement (depends entirely on my animation skills!); bad collision reactions with the physics engine (dynamic bodies simply slide out of the way of static objects); I would need to calculate collisions between physics objects and my limbs myself and apply directional forces to them; hard to account for slopes/stairs/non-standard planes when animating walking/running; need to convert objects to dynamic bodies when reacting to projectile/fighting/death physics. The question: as you can see, I have thought about this extensively. I have also Googled physics-based animation and have found mostly dissertation papers, which fills me with the sense that it may be a lot more advanced than my mathematics skills. My question is mostly subjective, based on my findings above and any experience you may have: which of the above methods should I use when creating my game? I am willing to spend the time to get a physics solution working if you think it would be possible. In the end I want to provide the most satisfying experience for the gamer, as well as a robust and dynamic system I can use to animate pretty much anything I need.
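
    Not from the post - a minimal, language-agnostic sketch (written in Python, though the poster works in C#) of the second option: each game step, find the two key frames that bracket the current animation time, linearly interpolate the bone's angle between them, and set the limb transform directly before stepping the physics. The key-frame data and names are made-up placeholders.

      def interpolate_angle(keyframes, t):
          """keyframes: list of (time, angle) pairs sorted by time; returns the lerped angle at t."""
          if t <= keyframes[0][0]:
              return keyframes[0][1]
          if t >= keyframes[-1][0]:
              return keyframes[-1][1]
          # Find the two key frames bracketing t and blend between them.
          for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
              if t0 <= t <= t1:
                  blend = (t - t0) / (t1 - t0)
                  return a0 + (a1 - a0) * blend

      # Example: a lower-arm swing defined by three key frames (seconds, radians).
      swing = [(0.0, 0.0), (0.25, 1.2), (0.5, 0.0)]
      for step in range(6):
          t = step * 0.1
          print(round(t, 1), round(interpolate_angle(swing, t), 3))

    The physics-driven option would replace the final "set the transform" step with applying a torque proportional to the difference between this interpolated target angle and the body's current angle (essentially a PD controller), which is where the tuning and balance problems described above come in.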

    Read the article

  • Where Facebook Stands Heading Into 2013

    - by Mike Stiles
    In our last blog, we looked at how Twitter is positioned heading into 2013. Now it’s time to take a similar look at Facebook. 2012, for a time at least, seemed to be the era of Facebook-bashing. Between a far-from-smooth IPO, subsequent stock price declines, and anxiety over privacy, the top social network became a target for comedians, politicians, business journalists, and of course those who were prone to Facebook-bash even in the best of times. But amidst the “this is the end of Facebook” headlines, the company kept experimenting, kept testing, kept innovating, and pressing forward, committed as always to the user experience, while concurrently addressing monetization with greater urgency. Facebook enters 2013 with over 1 billion users around the world. Usage grew 41% in Brazil, Russia, Japan, South Korea and India in 2012. In the Middle East and North Africa, an average 21 new signups happen per minute. Engagement and time spent on the site would impress the harshest of critics. Facebook, while not bulletproof, has become such an integrated daily force in users’ lives, it’s getting hard to imagine any future mass rejection. You want to see a company recognizing weaknesses and shoring them up. Mobile was a weakness in 2012 as Facebook was one of many caught by surprise at the speed of user migration to mobile. But new mobile interfaces, better mobile ads, speed upgrades, standalone Messenger and Pages mobile apps, and the big dollar acquisition of Instagram, were a few indicators Facebook won’t play catch-up any more than it has to. As a user, the cool thing about Facebook is, it knows you. The uncool thing about Facebook is, it knows you. The company’s walking a delicate line between the public’s competing desires for customized experiences and privacy. While the company’s working to make privacy options clearer and easier, Facebook’s Paul Adams says data aggregation can move from acting on what a user is engaging with at the moment to a more holistic view of what they’re likely to want at any given time. To help learn about you, there’s Open Graph. Embedded through diverse partnerships, the idea is to surface what you’re doing and what you care about, and help you discover things via your friends’ activities. Facebook’s Director of Engineering, Mike Vernal, says building mobile social apps connected to Facebook in such ways is the next wave of big innovation. Expect to see that fostered in 2013. The Facebook site experience is always evolving. Some users like that about Facebook, others can’t wait to complain about it…on Facebook. The Facebook focal point, the News Feed, is not sacred and is seeing plenty of experimentation with the insertion of modules. From upcoming concerts, events, suggested Pages you might like, to aggregated “most shared” content from social reader apps, plenty could start popping up between those pictures of what your friends had for lunch.  As for which friends’ lunches you see, that’s a function of the mythic EdgeRank…which is also tinkered with. When Facebook changed it in September, Page admins saw reach go down and the high anxiety set in quickly. Engagement, however, held steady. The adjustment was about relevancy over reach. (And oh yeah, reach was something that could be charged for). Facebook wants users to see what they’re most likely to like, based on past usage and interactions. Adding to the “cream must rise to the top” philosophy, they’re now even trying out ordering post comments based on the engagement the comments get. 
Boy, it’s getting competitive out there for a social engager. Facebook has to make $$$. To do that, they must offer attractive vehicles to marketers. There are a myriad of ad units. But a key Facebook marketing concept is the Sponsored Story. It’s key because it encourages content that’s good, relevant, and performs well organically. If it is, marketing dollars can amplify it and extend its reach. Brands can expect the rollout of a search product and an ad network. That’s a big deal. It takes, as Open Graph does, the power of Facebook’s user data and carries it beyond the Facebook environment into the digital world at large. No one could target like Facebook can, and some analysts think it could double their roughly $5 billion revenue stream. As every potential revenue nook and cranny is explored, there are the users themselves. In addition to Gifts, Facebook thinks users might pay a few bucks to promote their own posts so more of their friends will see them. There’s also word classifieds could be purchased in News Feeds, though they won’t be called classifieds. And that’s where Facebook stands; a wildly popular destination, a part of our culture, with ever increasing functionalities, the biggest of big data, revenue strategies that appeal to marketers without souring the user experience, new challenges as a now public company, ongoing privacy concerns, and innovations that carry Facebook far beyond its own borders. Anyone care to write a “this is the end of Facebook” headline? @mikestilesPhoto via stock.schng

    Read the article

  • The best Windows 7 virtual desktop tool by far… Dexpot

    - by Eric Nelson
    [Oh – and Windows XP, Vista, etc.] Every so often I yearn for the virtual desktop functionality that is implemented so well under Linux. Unfortunately, every time I start looking for a great tool for Windows I ultimately end up disappointed. But… I think this time around I have actually found one that will outlast the first day or two and become a must-have. Check out http://www.dexpot.de/ So far this is 100% stable, 100% sensible, and offers awesome functionality, yet is still very simple to use. There is a detailed look at the many features on the site, but a couple that do it for me: the Desktop Manager and next/previous tray icons make it easy to navigate around; an announcement of the desktop as it takes focus; and, best of all, Windows 7 preview integration. And… it is FREE for private use, and you get 30 days to try it out for professional use (e.g. me).

    Read the article

  • Thank you to all entrants! Finalist announcement coming soon...

    - by Rebecca Amos
    We had a fantastic response to this year's Exceptional DBA Awards. A big thank you to everyone who took the time and effort to make a nomination - it's great to see so many DBAs being appreciated for the hard work that they do. We're now busy collating the answers to send off to the Exceptional DBA judges. They'll pick their five finalists, which we'll be announcing in a few weeks’ time. So watch this space for further details. In the meantime, don't forget you can still download your free resources from the Exceptional DBA Award website. You can use them for your own career and personal development; pass them on to a great DBA you know, or to start planning your entry for next year!

    Read the article

  • What is the best free program to burn DVD movies from DVD movie files that are present on the hard drive the way ImgBurn does it in Windows?

    - by cipricus
    Evaluating burning programs is difficult: it takes time, and errors are very unpleasant. I have looked at AcetoneISO, Brasero, and K3b (which many recommend), but they seem more limited than Nero, CDBurnerXP, and especially ImgBurn on Windows. They all seem to work (I have not tested them fully myself), but at a more basic level (create ISOs, burn them, etc.). When it comes to something like creating a DVD movie disc from DVD movie files present on the hard drive, it seems to me that I would have to create the ISO first and then burn it, which uses more hard drive space and more time. Looking in the Windows direction, there is a Nero for Linux. It costs money. ImgBurn is said to work well in Wine, but as yet I have not tested it enough. What other options are there?

    Read the article

  • Auto-tiling with Yoshi's Island style tiles

    - by Boreal
    I'm creating a 2D platformer and I'd like to implement an auto-tiling system. Normally, this wouldn't be particularly difficult. However, I'd like to have tiles like in Yoshi's Island, where the graphics extend past the actual collidable tile's boundaries. Consider this image: Although the eggs and the Piranha Plant are clearly resting on the ground, the flower tiles continue behind them, out of the collidable tile. I know that it would be simple to do by hand, but extremely time consuming. Using an auto-tiling algorithm would save me a lot of time and boredom, but I'm not sure where to start.
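
    Not part of the question - a minimal sketch of the usual starting point for auto-tiling, assuming a simple 4-neighbour bitmask (the grid and the mask-to-sprite mapping are placeholders): compute a mask for each solid cell from which of its neighbours are solid, and use that mask to pick the tile graphic. The Yoshi's Island-style overhanging art could then be handled as a separate, non-collidable decoration layer chosen from the same masks and drawn one cell beyond the solid tiles.

      # 0 = empty, 1 = solid ground (hypothetical example map).
      grid = [
          [0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0],
      ]
      ROWS, COLS = len(grid), len(grid[0])

      def solid(r, c):
          """Cells outside the map count as empty."""
          return 0 <= r < ROWS and 0 <= c < COLS and grid[r][c] == 1

      def tile_mask(r, c):
          """4-bit mask of solid neighbours: N=1, E=2, S=4, W=8."""
          mask = 0
          if solid(r - 1, c): mask |= 1
          if solid(r, c + 1): mask |= 2
          if solid(r + 1, c): mask |= 4
          if solid(r, c - 1): mask |= 8
          return mask

      # Each mask value (0-15) indexes into a 16-entry sprite table.
      for r in range(ROWS):
          print([tile_mask(r, c) if solid(r, c) else "." for c in range(COLS)])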

    Read the article

  • Please help: installation has ruined my laptop

    - by user287694
    Hi, this was my first time using Linux. I installed Ubuntu from Windows 7 via USB. After booting into Ubuntu, I proceeded to do a full install over Windows. This seemed to be going fine, but at the point where I was to sign up to Ubuntu One, it froze. After a while, maybe 30 mins, I realised nothing was going to happen, so I restarted. Now, after the BIOS options bit passes, all I get is a prompt flashing at the top left of a black screen. I have tried booting up via the USB option but nothing happens; I have tried this 6 or 7 times with a fresh USB each time, and also with Lubuntu. I'm not sure if I have correctly made a live USB; obviously I believe I have, but being new I can't guarantee this. Can anyone help me get something functional out of my laptop? Thanks, Ben

    Read the article

  • 25 Secrets for Faster ASP.NET: the Eagle has landed!

    - by Michaela Murray
    On Friday we launched our new free eBook, 25 Secrets for Faster ASP.NET Applications! Heading for 1000 of you have picked it up already, but if you haven’t got your copy yet, you can grab it from http://www.red-gate.com/25secrets. It’s the follow up to the wildly successful 50 Ways to Avoid, Find and Fix ASP.NET Performance Issues, which we released back in January this year (you can download from www.red-gate.com/50ways). Once again, we collected tips from some of the smartest brains in the ASP.NET community, but this time around, we’ve covered the latest stuff in the .NET framework – async/await, Web API, and more. Houston, we have a winner… In my original blogpost, I offered a Microsoft Surface as a prize for the best tip. Now, after some serious deliberation, our judges have settled on a winner. By a unanimous verdict, the prize goes to… (wait for it!) … Jeffrey Richter, for this cheeky number, Tip #1 in the new book: Want to build scalable websites and services? Work asynchronously One of the secrets to producing scalable websites and services is to perform all your I/O operations asynchronously to avoid blocking threads. When your thread issues a synchronous I/O request, the Windows kernel blocks the thread. This causes the thread pool to create a new thread, which allocates a lot of memory and wastes precious CPU time. Calling xxxAsync method and using C#’s async/await keywords allows your thread to return to the thread pool so it can be used for other things. This reduces the resource consumption of your app, allowing it to use more memory and improving response time to your clients. Congratulations Jeffrey! Of course, I also owe a massive thank you to everyone who’s been involved in the book, especially all the authors. It’s a real treat to work with a developer community that’s so keen to collaborate and to share their hard-won nuggets of performance knowhow. If you haven’t read it yet, I can’t recommend it highly enough. You can get it for free at www.red-gate.com/25secrets The full backstory for both eBooks: https://www.simple-talk.com/blogs/2012/11/15/application-performance-the-best-of-the-web/ https://www.simple-talk.com/blogs/2012/11/27/application-performance-episode-2-announcing-the-judges/ https://www.simple-talk.com/blogs/2013/01/25/free-ebook-50-ways-to-avoid-find-and-fix-asp-net-performance-issues/ https://www.simple-talk.com/blogs/2013/03/22/50-ways-to-avoid-find-and-fix-asp-net-performance-issues-the-next-generation/

    Read the article

  • What makes a good developer / system documentation?

    - by deamon
    Much time is wasted getting new developers started with existing software systems, because there is no good documentation. But what makes system documentation good? One thing is good API documentation, like the Java API doc, but how do you transfer the "bigger picture" and other things that cannot be placed in the API doc? One constraint is that it should not be too hard and time-consuming to write the docs, because that is one reason why they are omitted so often. So, what makes documentation good?

    Read the article
