Search Results

Search found 11553 results on 463 pages for 'bad programmer'.


  • wynapse.com down last night, SC tonight

    - by Dave Campbell
    In an industry segment where nobody is ever 'asleep', I suppose no time is a good time to take SQL Server down for upgrades, and I had forgotten that my host was going to be doing that. Last night about 9pm (Arizona), in the middle of working on a blog post, things started going wonky and I finally realized everything was ok except for SQL Server. I turned in a ticket on it and was reminded about the maintenance schedule... guess I file those away in memory and just assume they'll happen while I'm asleep :) So, looking at the schedule, it appears that SQL Server for SilverlightCream is going to be down tonight. Minimum is 9-12pm Arizona time... mileage and time may vary. Since all the posts are run through SC to get the Skim count, having SQL down sucks, but I'd rather we got maintenance than have to react to a crash because of something that wasn't maintained. I'll try to get the next 'Cream post out early so that the bulk of folks can dig through it prior to the outage. Meanwhile, for those of you in Phoenix, tonight is our Phoenix Silverlight User Group April meeting, and Joel Neubeck is going to be giving us the run-through on Windows Phone 7! We're not as advanced as those MVP rock-stars in California like Victor Gaudioso, who streams his user group meetings, so you'll just have to show up for the goodness! And for anyone who's interested, here's some WP7 bling for your desktop... I want some of this real bad for my laptop! Get the full image in the post by Ozymandias:

    Read the article

  • Why I love NUnit, NCover, CC Nant and friends

    - by gregarobinson
    I have used these open-source tools on past projects in different stages, but never all of them at once. I am on a project now where there is a build server, Subversion, NAnt, NUnit with 100% NCover required coverage, CruiseControl, CCTray and Rhino Mocks. I was extending an Interface and concrete class in a solution I had never worked on before today. Automatic builds were turned off for the day for a special-case QA test. I added my new members to the Interface, implemented them in the concrete class, did a local build, tested, all looked good, so I did a Subversion Update then Commit. Around 4:30PM the automatic builds were turned back on. Right away the build failed for less than 100% code coverage on my last Commit. Turns out there was a project in the solution I modified that had numerous NUnit tests on the Interface/Concrete class I modified, 3 of which now failed. Now that is cool... of course I was frustrated as I wanted to go home... but... I did a bad thing: I did not run NAnt on the source prior to my Commit. Lesson learned, and a great lesson at that!

    Read the article

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is being developed for Android currently. I use OGG files: 96kbps VBR, 44.1KHz, 2 channels (that means stereo, right?). I read the other stackexchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even with 80kbps, my effects sound OK. But I tested on a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions: For mobile phones and tablets, generally, what parameters are recommended? Won't my 80kbps sounds be bad on a newer device (such as a modern tablet)? I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?); is there any noticeable difference at all for mobile phones / tablets? (in terms of the player experience) Is it worth it at all? I assume that stereo sounds take up much more memory (when they're decoded to PCM), despite the fact that the compressed OGG size is practically the same. Reacting to Roy T.'s great comment: Actually, I couldn't measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting it: the new WAV file size is half what it was before; the OGG file size is practically the same as before. The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers & software, so he just gave me some high-quality WAV files generated via his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them the same (= moving to mono), or might there be some differences unnoticeable to the human ear?

    Read the article

  • How to protect a peer-to-peer network from inappropriate content?

    - by Mike
    I’m developing a simple peer-to-peer app in .Net which should enable users to share specific content (text and picture files). As I've learned with my last question, inappropriate content can “relatively” easily be identified / controlled in a centralized environment. But what about a peer-to-peer network, what are the best methods to protect a decentralized system from unwanted (illegal) content? At the moment I only see the following two methods: A protocol (a set of rules) defines what kind of data (e.g. only .txt and jpg-files, not bigger than 20KB etc.) can be shared over the p2p-network and all clients (peers) must implement this protocol. If a peer doesn’t, it gets blocked by other peers. Pro: easy to implement. Con: It’s not possible to define the perfect protocol (I think eMail-Spam filters have the same problem) Some kind of rating/reputation system must be implemented (similar to stackoverflow), so “bad guys” and inappropriate content can be identified / blocked by other users. Pro: Would be very accurate. Con: Would be slow and in my view technically very hard to implement. Are there other/better solutions? Any answer or comment is highly appreciated.
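    A minimal sketch (not from the question; names are hypothetical) of the first option, a shared protocol that every peer enforces before accepting or forwarding content:

    // Hypothetical validation rules every peer applies; anything that fails is
    // rejected and the sending peer can be flagged or blocked by the others.
    using System.IO;
    using System.Linq;

    public static class SharingProtocol
    {
        private static readonly string[] AllowedExtensions = { ".txt", ".jpg" };
        private const long MaxSizeBytes = 20 * 1024;   // 20KB, per the example above

        public static bool IsAcceptable(string fileName, long sizeBytes)
        {
            var extension = Path.GetExtension(fileName).ToLowerInvariant();
            return AllowedExtensions.Contains(extension) && sizeBytes <= MaxSizeBytes;
        }
    }

    As the question already notes, a rule set like this can only ever be approximate (like a spam filter), so it is best combined with the second idea: a reputation score per peer that drops whenever that peer sends content failing the protocol.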

    Read the article

  • Smartphone Apps: music, licenses and fees .. nightmare

    - by mm24
    I have recently asked a question about music in games like Guitar Hero. I have found that in Europe (at least), if I do want to use a track composed by a musician who is a member of a royalty collecting society, I need to pay a flat fee to the society and not only to the member. So a "one-to-one" agreement is not valid, and the society can come after me and ask for money for each download. Even if it is FREE! This is a fee sheet list of the UK agency: for the fee, see "Permanent download services". It is about 1,200 GBP for fewer than 22,000 copies, and they DON'T specify anything more; they told me on the phone that I need to wait and see how many downloads I get before knowing the price. This is kind of crazy, as if I give away the App for free I will have to PAY 1,200 GBP!! I am shocked and I feel very bad. One agency suggested that I use a fake name for the artist, but that way it is not fair to my collaborators, as what they hope is that the App gets lots of downloads and in this way other people will get to know about them and hopefully commission them more work. The other solution is to work only with non-registered musicians. The question here to you is: has anyone found a legal way to use music from registered authors in a game?

    Read the article

  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
    I'm now doing a lot of SQL development at my new job, whereas before I was doing Object Oriented desktop app stuff. I keep running across very large scripts (thousands of lines) and wanting to refactor in some way. I am seeing that SQL is a different sort of beast and it's probably fine to have these big scripts for the most part, but while explaining this to me people are also insisting that the whole idea of refactoring is bad. That stuff like the .NET compiler is actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient and don't have efficient memory management or run too many CPU instructions compared to older "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler like a C compiler is modestly more "efficient" (whatever that means at this high a level without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?

    Read the article

  • Day 6 - Game Menuing Woes and Future Screen Sneak Peeks

    - by dapostolov
    So, after my last post on Day 5 I dabbled with my game class design. I took the approach where each game object is tightly coupled with a graphic. The good news is I got the menu working, but not without some hard knocks and game growing pains. I'll explain later, but for now... here is a class diagram of my first stab at my class structure and some code... Ok, there are a few mistakes, however, I'm going to leave it as is for now... As you can see I created an initial abstract base class called GameSprite. This class when inherited will provide a simple virtual default draw method: public virtual void DrawSprite(SpriteBatch spriteBatch) { spriteBatch.Draw(Sprite, Position, Color.White); } Coding it this way allows me to inherit the class and utilise the method in the screen draw method... So regardless of what the graphic object type is, it will now have the ability to render a static image on the screen. Example: public class MyStaticTreasureChest : GameSprite {} If you remember the window draw method from Day 3's post, we could use the above code as follows... protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(SpriteBlendMode.AlphaBlend); foreach(var gameSprite in ListOfGameObjects) { gameSprite.DrawSprite(spriteBatch); } spriteBatch.End(); base.Draw(gameTime); } I have to admit the GameSprite object is pretty plain, as is its DrawSprite method... But ... we now have the ability to render 3 static menu items on the screen ... BORING! I want those menu items to do something exciting, which of course involves animation... So, let's have a peek at AnimatedGameSprite in the above game diagram. The idea with the AnimatedGameSprite is that it has an image to animate... such as ... characters, fireballs, and... menus! So after inheriting from the GameSprite class, I added a few more options such as UpdateSprite... public virtual void UpdateSprite(float elapsed) { _totalElapsed += elapsed; if (_totalElapsed > _timePerFrame) { _frame++; _frame = _frame % _framecount; _totalElapsed -= _timePerFrame; } } And an overridden DrawSprite... public override void DrawSprite(SpriteBatch spriteBatch) { int FrameWidth = Sprite.Width / _framecount; Rectangle sourcerect = new Rectangle(FrameWidth * _frame, 0, FrameWidth, Sprite.Height); spriteBatch.Draw(Sprite, Position, sourcerect, Color.White, _rotation, _origin, _scale, SpriteEffects.None, _depth); } With these two methods... I can animate an image; all I had to do was add a few more lines to the screen's Update method (from Day 3), like such: float elapsed = (float) gameTime.ElapsedGameTime.TotalSeconds; foreach (var item in ListOfAnimatedGameObjects) { item.UpdateSprite(elapsed); } And voila! My images begin to animate in one spot on the screen... Hmm, but how do I interact with the menu items using a mouse... well the mouse cursor was easy enough... this.IsMouseVisible = true; But, to have it "interact" with an image was a bit more tricky... I had to perform collision detection!
    mouseStateCurrent = Mouse.GetState(); var uiEnabledSprites = (from s in menuItems where s.IsEnabled select s).ToList(); foreach (var item in uiEnabledSprites) { var r = new Rectangle((int)item.Position.X, (int)item.Position.Y, item.Sprite.Width, item.Sprite.Height); item.MenuState = MenuState.Normal; if (r.Intersects(new Rectangle(mouseStateCurrent.X, mouseStateCurrent.Y, 0, 0))) { item.MenuState = MenuState.Hover; if (mouseStatePrevious.LeftButton == ButtonState.Pressed && mouseStateCurrent.LeftButton == ButtonState.Released) { item.MenuState = MenuState.Pressed; } } } mouseStatePrevious = mouseStateCurrent; So, basically, what the code above does is iterate through all my interactive objects, detect a rectangle collision with the mouse, and set each object's state so the matching state animation (or static image) plays. Lessons Learned, Time Burned... So, I think I did well to start, but after I hammered out my prototype... well... things got sloppy and I began to realise some design flaws... At the time: I couldn't seem to figure out how to open another window, such as the character creation screen; input was not event based and it was bugging me; my menu design relied heavily on mouse input and I couldn't use the keyboard; mouse input is tightly bound with graphic rendering / positioning, so its logic will have to be in each scene; and menu animations would stop mid frame, then continue when the action occurred again. This is bad, because... what if I had a sword sliding on the screen? Then it would slide a quarter of the way, then stop due to another action, then render again mid-slide... it just looked sloppy. Menu, Solved!? To solve the above problems I did a little research and I found some great code in the XNA forums. The one worth mentioning was the GameStateManagementSample. With this sample, you can create a basic "text based" menu system which allows you to swap screens, popup screens, play the game, and quit... basic game state management... In my next post I'm going to delve a bit more into this code and adapt it to my code from this prototype. Text based menus just won't cut it for me, for now... however, I'm still going to stick with my animated menu item idea. A sneak peek using the Game State Management Sample... with no changes made... Cool Things to Mention: At work I tend to break out in random conversations every so often and I get talking about some of my challenges with this game (or some stupid observation about something... stupid). During one conversation I was discussing how I should animate my images; I explained that I knew I had to use the Update method provided, but I didn't know how (at the time) to render an image at an appropriate "pace", how many frames to use, etc. I also got thinking that if a machine rendered my images faster / slower, that was surely going to f-up my animations. To which a friend, Sheldon, answered: surely the Draw method is like a camera taking a snapshot of a scene in time. Then it clicked... I understood the big picture of the game engine... After some research I discovered that the Draw method attempts to keep a framerate of 60 fps.
    From what I understand, the game engine will even leave out a few calls to the Draw method if it begins to slow down. This is why we want to put our sprite updates in the Update method. Then, using a game timer (provided by the engine), we want to render the scene based on real time passed, not framerate. So even if the engine renders at 20 fps, the animations will still animate at the same real-time speed! Which brings up another point. Why 60 fps? I'm speculating that Microsoft capped it because LCDs don't refresh faster than 60 fps? On another note, if the game engine knows it's falling behind in rendering... then surely we can harness this to speed up our games. Maybe I can find some flag which tells me if the game is lagging, and what the current framerate is, etc... (instead of coding it like I did last time). Sheldon suggested maybe I can render like WoW does, in prioritised layers... I think he's onto something, however I don't think I'll have that many graphics to worry about such a problem of graphic latency. We'll see. People to Mention: Well, as you are aware I hadn't posted in a couple of days and I was surprised to see a few emails and messenger queries about my game progress (and some concern as to why I stopped). I want to thank everyone for their kind words of support and put everyone at ease by stating that I do intend on completing this project. Granted I only have a few hours each night, but I'll do it. Thank you to Garth for mailing in my next screen! That was a nice surprise! The Sneak Peek you've been waiting for... Garth has also volunteered to render me some wizard images. He was a bit shocked when I asked for them in 2D animated strips. He said I was going backward (and that I have really bad Game Development Lingo). But I advised Garth that I will use 3D images later... for now... 2D images. Garth also had some great game design ideas to add on. I advised him that I will save his ideas and include them in the future design document (for the 3D version?). Lastly, my best friend Alek is going to join me in developing this game. This was a project we started eons ago but never completed because of our careers. Now priorities change and we have some spare time on our hands. Let's see what trouble Alek and I can get into! Tonight I'll be uploading my prototypes and base game to source control for both of us to work off of. D.
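    A small sketch (not from the original post) of what the "lagging" flag speculated about above could look like in the Update method; XNA's GameTime does expose IsRunningSlowly alongside the elapsed real time used for frame-rate-independent animation:

    protected override void Update(GameTime gameTime)
    {
        // Real seconds since the last Update, regardless of the current frame rate.
        float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

        foreach (var item in ListOfAnimatedGameObjects)
        {
            item.UpdateSprite(elapsed);
        }

        // True when the engine is skipping Draw calls to let Update catch up;
        // a hook for dropping detail or logging a performance warning.
        if (gameTime.IsRunningSlowly)
        {
            // e.g. skip decorative animations, reduce particle counts, etc.
        }

        base.Update(gameTime);
    }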

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this: static void Main() { // Load parameters from app.config. // Get documents from queue. var files = someInterface.GetFiles(someFilterOrRegexPattern); foreach (var file in files) { // Extract metadata from the file. // Validate some attributes of the file; add any validation errors to an in-memory // structure (e.g. List<ValidationErrors>). if (isValid) { var fileData = File.ReadAllBytes(file); // Upload using some wrapper for an ORM or DAL someInterface.Upload(fileData, meta.Param1, meta.Param2, ...); } else { // Bounce the file } } // Report any validation errors (via message bus or SMTP or some such). } And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# Console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
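    A rough sketch (an assumption about the "Worker" wrapper described above, with hypothetical interface names) of pushing the queue and upload dependencies in through the constructor so Main stays thin and the pieces can be mocked in tests:

    using System.Collections.Generic;

    // Hypothetical abstractions over the queue folder and the SFDC upload path.
    public interface IDocumentQueue
    {
        IEnumerable<string> GetFiles(string filterPattern);
    }

    public interface IContentUploader
    {
        void Upload(byte[] data, string fileName);
    }

    public class QueueWorker
    {
        private readonly IDocumentQueue _queue;
        private readonly IContentUploader _uploader;

        public QueueWorker(IDocumentQueue queue, IContentUploader uploader)
        {
            _queue = queue;
            _uploader = uploader;
        }

        public IList<string> Run(string filterPattern)
        {
            var validationErrors = new List<string>();
            foreach (var file in _queue.GetFiles(filterPattern))
            {
                // Metadata extraction and validation omitted; collect errors
                // instead of throwing so one bad document doesn't stop the batch.
                var data = System.IO.File.ReadAllBytes(file);
                _uploader.Upload(data, System.IO.Path.GetFileName(file));
            }
            return validationErrors;
        }
    }

    With this shape, the console Main just builds the concrete queue/uploader implementations and calls Run; swapping the console host for a Windows service or a scheduled task later doesn't touch the ETL logic.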

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not scrumy (related to scrum), but rather related to a bad implementation of scrum. So, here I have some questions about the scope of the responsibilities of the PO (product owner) and the boundaries he/she shouldn't cross. Should the PO interfere in UI design when there are designers at work on the scrum team? (An example of this which has happened to us is replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, or stuff like that.) If so, to what extent? Colors? Layout? Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.? Should the PO interfere in validation mechanisms? For example, this field should be required, or we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI. How granular could/should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a wizard of five steps, or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation? I ask these questions to judge whether our implementation is right or wrong.

    Read the article

  • Need database selection advice

    - by jacknad
    I know this is considered a bad question since there is no correct answer, but I need to decide on a database for embedded linux (DaVinci 368 based) hardware and I've never had to produce a design with a database before. Each record will probably contain less than 1000 images with associated alpha-numeric data and the mass storage will be some kind of flash drive. Only one user needs access to the data at a time. MySQL claims to be "The world's most popular open source database" but SQLite claims to be "the most widely deployed SQL database engine in the world." Perhaps there is another that is also the best in the world? Which is easiest to use for a database newbie? Should I just flip a coin? Does it really matter which one I pick? Do I even need to use a database software package or should I roll my own? I won't need bells and whistles like sorting, but I'll probably need to delete the oldest records to make room for new ones if the storage fills up.

    Read the article

  • What kind of code would Kent Beck avoid unit testing?

    - by tieTYT
    I've been watching a few of the Is TDD Dead? talks on youtube, and one of the things that surprised me is Kent Beck seems to acknowledge that there are just some kinds of programs that aren't worth unit testing. For example, right here DHH says that Kent Beck is ... very happy to say "Well, TDD doesn't fit in this case, I'm just going to bail" It's frustrating to me that Kent Beck seems to acknowledge this, but nobody asks him to elaborate on it or give concrete examples. I'd like to know the situations where Kent Beck thinks TDD is a bad fit. Nobody can read his mind or speak for him, but I'm hoping he's been transparent enough through his books/tweets/whatever for someone to be able to answer. I'm not necessarily going to take what he says as gospel, but it would be useful to know that the times I've tried TDD and it just felt impossible/useless are situations that he would have bailed on it himself. Or, if it turned out he would have tested that code it'd suggest to me that I was approaching the process very wrong. I also think it would be enlightening to understand why he would bail on such projects. My opinion on why this is not a duplicate of "When is it appropriate to not unit test?" After skimming those answers I'm not satisfied. For example, look at UncleBob's answer. He doesn't even acknowledge that such a situation exists. I really think there's value in understanding Kent Beck's position, not just a general, "What's your opinion?" type of question. After all, he's the father of TDD.

    Read the article

  • How's your Momma an' them?

    - by Bill Jones Jr.
    When a Southern “boy” like me sees somebody that used to be, or should be, a close friend or relative that they haven’t seen in a long time, that’s a typical greeting.  Come to think of it, we were often related to close friends. So “back in the day”, we not only knew people but everybody close to them.  When I started driving, my Dad told me to always drive carefully in Polk county.  He said if I ran into anybody there, it was likely they would be related or close family friends. Not so much any more… the cities have gotten bigger and more people come south and stay.  One of the curses of air conditioning I guess. Anyway, it’s been a while.  So “How’s your Momma and them”?  Have you been waiting for me to blog again?  Too bad, I’m back anyway <smile>. Here in Charlotte we just had another great code camp.  The Enterprise Developers Guild is going strong, thanks to the help of a lot of dedicated people.  Mark Wilson, Brian Gough, Syl Walker, Ghayth Hilal, Alberto Botero, Dan Thyer, Jean Doiron, Matt Duffield all come to mind.  Plus all the regulars who volunteer for every special event we have. Brian Gough put on a successful SharePoint Saturday.  Rafael Salas and our friends at the local Pass SQL group had a great SQL Saturday.  Brian Hitney and Glen Gordon keep on doing their usual great job for developers in the southeast as our local Microsoft reps. Since my last post, I have the honor of being designated the INetA Membership Mentor for Georgia in addition to mentoring the groups in the Carolinas for the past several years.  Georgia could be a really good thing since my wife likes shopping in Atlanta, not to mention how much we both like Georgia in general.  As I recall, my Momma had people in Georgia.  Wonder how their “Mommas an’ them” are doing?   Bill J

    Read the article

  • What is the best practice for when to check if something needs to be done?

    - by changokun
    Let's say I have a function that does x. I pass it a variable, and if the variable is not null, it does some action. And I have an array of variables and I'm going to run this function on each one. Inside the function, it seems like a good practice is to check if the argument is null before proceeding. A null argument is not an error, it just causes an early return. I could loop through the array and pass each value to the function, and the function will work great. Is there any value to checking if the var is null and only calling the function if it is not null during the loop? This doubles up on the checking for null, but: Is there any gained value? Is there any gain on not calling a function? Any readability gain on the loop in the parent code? For the sake of my question, let's assume that checking for null will always be the case. I can see how checking for some object property might change over time, which makes the first check a bad idea. Pseudo code example: for(thing in array) { x(thing) } Versus: for(thing in array) { if(thing not null) x(thing) } If there are language-specific concerns, I'm a web developer working in PHP and JavaScript.
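    A minimal sketch of the two variants in code (an illustration with hypothetical names; the question itself is language-agnostic, PHP/JavaScript in the asker's case):

    using System;
    using System.Collections.Generic;

    class GuardClauseDemo
    {
        // The function guards itself: a null argument is not an error, just an early return.
        static void Process(string thing)
        {
            if (thing == null) return;
            Console.WriteLine(thing.ToUpper());
        }

        static void Main()
        {
            var things = new List<string> { "a", null, "b" };

            // Variant 1: the caller relies entirely on the guard inside Process.
            foreach (var thing in things)
                Process(thing);

            // Variant 2: the caller also checks, saving a call at the cost of
            // duplicating the rule in every call site.
            foreach (var thing in things)
                if (thing != null)
                    Process(thing);
        }
    }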

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check for every one-time event for every user on every page view. While writing the question I got one idea: to have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log and once they are completed they are removed from the log. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem. I would appreciate any input on what best practice is.
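    A rough sketch of that pending-events idea (hypothetical names; it assumes an event row is seeded per user when a feature ships and deleted once completed or dismissed, so users with nothing pending cost only one indexed lookup):

    using System.Collections.Generic;

    // Hypothetical store backed by a (user_id, event_name) table indexed on user_id.
    public interface IPendingEventStore
    {
        IReadOnlyList<string> GetPendingEvents(long userId);    // e.g. ["NewTutorialIntro"]
        void MarkCompleted(long userId, string eventName);      // deletes the row
    }

    public class OneTimeEventChecker
    {
        private readonly IPendingEventStore _store;
        public OneTimeEventChecker(IPendingEventStore store) { _store = store; }

        public IReadOnlyList<string> EventsToShow(long userId)
        {
            // Called once per page view (or cached in the session after login).
            return _store.GetPendingEvents(userId);
        }

        public void Dismiss(long userId, string eventName)
        {
            // Completing or clicking the tutorial away removes it for good.
            _store.MarkCompleted(userId, eventName);
        }
    }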

    Read the article

  • Would this be viewed poorly amongst the programming community?

    - by Eric P
    So one of my responsibilities at work is to build an internal tool that helps the workers enter all their information. It's an enterprise application that is similar to a Windows Forms database tool. So it's not much different than developing a Word + Excel combo application, but the average person in this workgroup is a 20-40 year old woman or a random chatty male type. Plus I know all of these people are heavily involved with Facebook on a daily basis. How bad would it be if I styled my new interface to be similar to what Facebook does? People could get award points and stuff when they fill out different types of forms and basically compete against each other as if it were a game. When people completed one, it would be posted on their wall and everyone could comment/like stuff just like in Facebook. And it would be like they are doing peer reviewing for fun. The rewards would be outstanding, I would imagine. These people are so into Facebook and Facebook games that productivity would rise due to them trying to compete and earn points and achievements. Would this be taking advantage of the people by 'tricking them into working harder by giving them a game', or would it be viewed as something that would improve happiness at work?

    Read the article

  • 12.10 visual performance using nvidia driver

    - by user100485
    My fresh Ubuntu 12.10 install is slow; nothing extreme, but dragging windows, switching workspaces and things like that are just slow and look horrible. It feels like the fps is dropping in a game. Doing some Photoshop work in Windows was even a relief! This effect gets worse if I connect my external monitor. My system is an Intel Pentium dual core T4500 with 4GB memory and a GeForce 8200M G/integrated/SSE2 graphics chip. Nothing fancy, but it should be able to run OK. My "experience" in Ubuntu is set to standard. (MSI CR500 laptop) I've installed the nvidia drivers, tried current and experimental, and the experimental drivers seem to perform a bit better, but overall it's still bad. I set the mode to adaptive in the nvidia-settings tool and it goes to the maximum setting directly and doesn't come back. Using htop I found out that compiz or the X server always uses a few percent of my CPU, more than I think it should, and the time consumed is 5:18 for compiz, 4:33 for /usr/bin/X and 2:41 for google chrome (about 30 tabs open, so not too strange I think). What can I do to increase the visual performance, because this makes me not want to use Ubuntu in public!

    Read the article

  • Strategy for managing lots of pictures for a website

    - by Nate
    I'm starting a new website that will (hopefully) have a lot of user generated pictures. I'm trying to figure out the best way to store and serve these pictures. The CMS I'm using (umbraco) has a media library that puts a folder on the server for each image. Inside of there you can have different sizes of that same image. That folder has an ID on it and the database has additional information for that image along with the ID of the folder. This works great for small sites, but what if the pictures get up to 10,000, 100,000 or 1,000,000? It seems like the lookup on the directory would take a long time to find the correct folder. I'm on windows 2008 if that makes a difference. I'm not so worried about load. I can load balance my server pretty easily and replicate the images across the servers. The nature of the site won't have a lot of users on it either, but it could have a lot of pics. Thanks. -Nate EDIT After some thought I think I'm going to create a directory for each user under a root image folder then have user's pictures under that. I would be pretty stoked if I had even 5,000 users, so that shouldn't be too bad of a linear lookup. If it does get slow I will break it down into folders like /media/a/adam/image123.png. If it ever gets really big I will expand the above method to build a bigger tree. That would take a LOT of content though.
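    A small sketch of the bucketing idea from the edit above (a hypothetical helper; it assumes the username is a safe path segment and simply derives a folder like /media/a/adam/ so no single directory accumulates huge numbers of entries):

    using System.IO;

    static class MediaPaths
    {
        // Build a path like media/a/adam/image123.png from a username and file name.
        public static string For(string userName, string fileName)
        {
            var user = userName.ToLowerInvariant();
            var bucket = user.Substring(0, 1);      // first letter as the bucket folder
            return Path.Combine("media", bucket, user, fileName);
        }
    }

    // Usage: MediaPaths.For("Adam", "image123.png") -> media\a\adam\image123.png on Windows.

    If a single letter ever produces folders that are too large, the same helper can be extended to two-level buckets (e.g. the first two characters, or a couple of characters from a hash of the username) without changing the calling code.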

    Read the article

  • Best algorithm for recursive adjacent tiles?

    - by OhMrBigshot
    In my game I have a set of tiles placed in a 2D array marked by their Xs and Zs ([1,1],[1,2], etc). Now, I want a sort of "Paint Bucket" mechanism: Selecting a tile will destroy all adjacent tiles until a condition stops it, let's say, if it hits an object with hasFlag. Here's what I have so far, I'm sure it's pretty bad, it also freezes everything sometimes: void destroyAdjacentTiles(int x, int z) { int GridSize = Cubes.GetLength(0); int minX = x == 0 ? x : x-1; int maxX = x == GridSize - 1 ? x : x+1; int minZ = z == 0 ? z : z-1; int maxZ = z == GridSize - 1 ? z : z+1; Debug.Log(string.Format("Cube: {0}, {1}; X {2}-{3}; Z {4}-{5}", x, z, minX, maxX, minZ, maxZ)); for (int curX = minX; curX <= maxX; curX++) { for (int curZ = minZ; curZ <= maxZ; curZ++) { if (Cubes[curX, curZ] != Cubes[x, z]) { Debug.Log(string.Format(" Checking: {0}, {1}", curX, curZ)); if (Cubes[curX,curZ] && Cubes[curX,curZ].GetComponent<CubeBehavior>().hasFlag) { Destroy(Cubes[curX,curZ]); destroyAdjacentTiles(curX, curZ); } } } } }
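    A hedged sketch (not the asker's code) of the same flood fill done iteratively with an explicit visited set, which avoids deep recursion and never revisits a tile whose Destroy() hasn't taken effect yet. It assumes the same Cubes[,] array and CubeBehavior.hasFlag as in the question, plus C# tuple syntax:

    void DestroyAdjacentTiles(int startX, int startZ)
    {
        int gridSize = Cubes.GetLength(0);
        var visited = new bool[gridSize, gridSize];
        var pending = new Queue<(int x, int z)>();   // needs System.Collections.Generic

        visited[startX, startZ] = true;
        pending.Enqueue((startX, startZ));

        while (pending.Count > 0)
        {
            var (x, z) = pending.Dequeue();

            // Visit the 8 neighbours of the current tile.
            for (int dx = -1; dx <= 1; dx++)
            for (int dz = -1; dz <= 1; dz++)
            {
                if (dx == 0 && dz == 0) continue;
                int nx = x + dx, nz = z + dz;
                if (nx < 0 || nz < 0 || nx >= gridSize || nz >= gridSize) continue;
                if (visited[nx, nz]) continue;
                visited[nx, nz] = true;

                var cube = Cubes[nx, nz];
                if (cube != null && cube.GetComponent<CubeBehavior>().hasFlag)
                {
                    Destroy(cube);
                    pending.Enqueue((nx, nz));       // keep spreading from this tile
                }
            }
        }
    }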

    Read the article

  • Algorithmically generating neon layers on pixel grid

    - by user190929
    In an attempt at a screensaver I am making, I am a fan of neon-like graphics, which, of course, look great against a black background. As I understand it, neon, graphically speaking, is essentially a gradient of a color, brightest in the center, and getting darker proceeding outward. A more accurate version is similar, but separates it into tubes and glow. The tubes are mostly white, while the glow is where most of the color is seen. Well... the tubes could also be a light variant of the color, you could say. The glow is darker. Anyhow, my question is, how could you generate such things given an initial pattern of pixels that would be the tubes? For example, let's say I want to make a neon 'H'. I, via the libraries, can attain the rectangles of pixels which represent it, but I want to make it look neonized. How could I algorithmically achieve such an effect given a base tube shape and base color? EDIT: ok, I misstated that. Got a bit distracted. My purpose for this was similar to a neon effect, but not quite. Sorry about that. What I am looking for is something like this: Start with a pattern of pixels: [!][!][!][!][!][!][!][!] [!][!][O][!][!][!][!][!] [!][!][O][O][!][!][!][!] [!][!][!][!][O][!][!][!] [!][!][!][!][!][!][!][!] How do I find the E pixels? [!][E][E][E][!][!][!][!] [!][E][O][E][E][!][!][!] [!][E][O][O][E][E][!][!] [!][E][E][E][O][E][!][!] [!][!][!][E][E][E][!][!] Sorry if that looks bad.
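    One way to compute those outline pixels (an illustration, not from the question) is a one-cell dilation of the tube pixels: every empty cell that touches a tube cell becomes a glow cell, and repeating the step on the combined result gives wider rings:

    // Given a boolean grid of tube pixels, mark every empty cell that touches a
    // tube cell (8-neighbourhood) as a glow cell.
    static bool[,] GlowRing(bool[,] tube)
    {
        int w = tube.GetLength(0), h = tube.GetLength(1);
        var glow = new bool[w, h];

        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (tube[x, y]) continue;                // only empty cells can become glow
            for (int dx = -1; dx <= 1 && !glow[x, y]; dx++)
            for (int dy = -1; dy <= 1; dy++)
            {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && ny >= 0 && nx < w && ny < h && tube[nx, ny])
                {
                    glow[x, y] = true;               // touches the tube, so it glows
                    break;
                }
            }
        }
        return glow;
    }

    Drawing the tube cells in white (or a light tint of the base color), the first ring in the base color, and each further ring in a darker shade approximates the brightest-in-the-center falloff described above.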

    Read the article

  • How can I hard reset a USB device?

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found to fix it once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged in on, so I'm looking for a way to do this through the command line. This post suggests running: sudo modprobe -w -r usb_storage; sudo modprobe usb_storage However I get an "unknown option -w" output. This slightly modified command: sudo modprobe -r usb_storage Fails with the message FATAL: Module usb_storage is in use. If I try to kill -9 the processes marked [usb-storage] before running they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on superuser.com as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.

    Read the article

  • WPF Applications – Handling the Unhandled

    - by David Totzke
    Instead of just letting your application crash, you can attach a method to the Application's DispatcherUnhandledException event and one to AppDomain.CurrentDomain.UnhandledException. You wire these up in the code-behind of your application, which by default is App.xaml.cs. You can log these errors or throw up a message Don Box and tell the user what happened. Then you shut down the app gracefully. You shut it down because something bad happened that you weren't expecting, and at this point there is no guarantee as to the state of the stack or memory or anything really. All bets are off. If, on the other hand, the method for the UnhandledException is empty and the method for the DispatcherUnhandledException ends up in a call to a method called LogError() and the LogError() method is FUCKING EMPTY, and you just swallow the exceptions and keep on running, then, not so much. I spent nearly a day trying to track down a bug that would have been obvious had something been logged or if it just crashed. It's my own fault I suppose. I knew these were hooked up. I just never suspected that there wouldn't be any implementation at all. Live and learn. Customs Man at Heathrow: Anything to declare, Sir? Jekyll and Hyde: Man has not evolved an inch from the slime that spawned him. Customs Man at Heathrow: Very Good, Sir. I tend to agree. Dave Just because I can…
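    A minimal sketch of that wiring in App.xaml.cs (the handler bodies are an illustration; the point, as above, is to log and shut down rather than silently swallow):

    using System;
    using System.Windows;
    using System.Windows.Threading;

    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // Exceptions thrown on the UI (Dispatcher) thread.
            DispatcherUnhandledException += OnDispatcherUnhandledException;

            // Exceptions thrown on any other thread in the AppDomain.
            AppDomain.CurrentDomain.UnhandledException += OnDomainUnhandledException;
        }

        private void OnDispatcherUnhandledException(object sender, DispatcherUnhandledExceptionEventArgs e)
        {
            Log(e.Exception);
            MessageBox.Show("Something went wrong; the application will now close.");
            e.Handled = true;   // stop WPF from rethrowing...
            Shutdown(-1);       // ...but still exit, because the state can't be trusted.
        }

        private void OnDomainUnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            Log(e.ExceptionObject as Exception);
        }

        private static void Log(Exception ex)
        {
            // Placeholder: write to a file, the event log, or a logging framework -
            // anything but an empty method.
            Console.Error.WriteLine(ex);
        }
    }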

    Read the article

  • Empirical evidence for choice of programming paradigm to address a problem

    - by Graham Lee
    The C2 wiki has a discussion of Empirical Evidence for Object-Oriented Programming that basically concludes there is none beyond appeal to authority. This was last edited in 2008. Discussion here seems to bear this out: questions on whether OO is outdated, when functional programming is a bad choice and the advantages and disadvantages of AOP are all answered with contributors' opinions without reliance on evidence. Of course, opinions of established and reputed practitioners are welcome and valuable things to have, but they're more plausible when they're consistent with experimental data. Does this evidence exist? Is evidence-based software engineering a thing? Specifically, if I have a particular problem P that I want to solve by writing software, does there exist a body of knowledge, studies and research that would let me see how the outcome of solving problems like P has depended on the choice of programming paradigm? I know that which paradigm comes out as "the right answer" can depend on what metrics a particular study pays attention to, on what conditions the study holds constant or varies, and doubtless on other factors too. That doesn't affect my desire to find this information and critically appraise it. It becomes clear that some people think I'm looking for a "turn the crank" solution - some sausage machine into which I put information about my problem and out of which comes a word like "functional" or "structured". This is not my intention. What I'm looking for is research into how - with a lot of caveats and assumptions that I'm not going into here but good literature on the matter would - certain properties of software vary depending on the problem and the choice of paradigm. In other words: some people say "OO gives better flexibility" or "functional programs have fewer bugs" - (part of) what I'm asking for is the evidence of this. The rest is asking for evidence against this, or the assumptions under which these statements are true, or evidence showing that these considerations aren't important. There are plenty of opinions on why one paradigm is better than another; is there anything objective behind any of these?

    Read the article

  • Using HTML5 Today part 2 – Fixing Semantic tags with a Shiv

    - by Steve Albers
    Semantic elements and the Shiv! This is the second entry in the series of demos from the “Using HTML5 Today” talk. For the definitive discussion on unknown elements and the HTML5 Shiv check out Mark Pilgrim’s Dive Into HTML5 online book at http://diveintohtml5.info/semantics.html#unknown-elements Semantic tags increase the meaning and maintainability of your markup, help make your page more computer-readable, and can even provide opportunities for libraries that are written to automagically enhance content using standard tags like <nav>, <header>,  or <footer>. Legacy IE issues However, new HTML5 tags get mangled in IE browsers prior to version 9.  To see this in action, consider this bit of HTML code which includes the new <article> and <header> elements: Viewing this page using the IE9 developer tools (F12) we see that the browser correctly models the hierarchy of tags listed above: But if we switch to IE8 Browser Mode in developer tools things go bad: Did you know that a closing tag could close itself?? The browser loses the hierarchy & closes all of the new tags.  The new tags become unusable and the page structure falls apart. Additionally block-level elements lose their block status, appearing as inline.    The Fix (good) The block-level issue can be resolved by using CSS styling.  Below we set the article, header, and footer tags as block tags. article, header, footer {display:block;} You can avoid the unknown element issue by creating a version of the element in JavaScript before the actual HTML5 tag appears on the page: <script> document.createElement("article"); document.createElement("header"); document.createElement("footer"); </script> The Fix (better) Rather than adding your own JS you can take advantage of a standard JS library such as Remy Sharp’s HTML5 Shiv at http://code.google.com/p/html5shiv/.  By default the Modernizr library includes HTML5 Shiv, so you don’t need to include the shiv code separately if you are using Modernizr.

    Read the article

  • What is a legal way to use music from registered authors in a game?

    - by mm24
    I have recently asked a question about music in games like Guitar Hero. I have found that in Europe (at least), if I do want to use a track composed by a musician who is a member of a royalty collecting society, I need to pay a flat fee to the society and not only to the member. So a "one-to-one" agreement is not valid, and the society can come after me and ask for money for each download. Even if it is FREE! This is a fee sheet list of the UK agency: for the fee, see "Permanent download services". It is about 1,200 GBP for fewer than 22,000 copies, and they DON'T specify anything more; they told me on the phone that I need to wait and see how many downloads I get before knowing the price. This is kind of crazy, as if I give away the App for free I will have to PAY 1,200 GBP!! I am shocked and I feel very bad. One agency suggested that I use a fake name for the artist, but that way it is not fair to my collaborators, as what they hope is that the App gets lots of downloads and in this way other people will get to know about them and hopefully commission them more work. The other solution is to work only with non-registered musicians. The question here to you is: Has anyone found a legal way to use music from registered authors in a game?

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality, and rather to the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, or misentries were made by accident on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, the data troubles may wake some managers at the customer, and some processes on the customer's side may not work anymore because their processes have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development, and data migration can be seen as being to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software which somewhat works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts which belong to this topic?

    Read the article
