Search Results

Search found 11674 results on 467 pages for 'adding'.

Page 230/467 | < Previous Page | 226 227 228 229 230 231 232 233 234 235 236 237  | Next Page >

  • Unsure about TRIM enabled on my SSD

    - by user84750
    I have an SSD (OCZ Vertex 4) installed on my laptop. I'm running Ubuntu 12.04 LTS. I have enabled TRIM by adding "discard" to my fstab file (I also added the noatime option). I rebooted Ubuntu and followed these instructions here to test TRIM. The end result in my tempfile was all ffff's, when it should have been all zeros, which tells me TRIM is not really working or not enabled correctly. Did I miss something? Also, will it be a problem if only my /home directory is encrypted? And if you ask why I have swap on my SSD, it's because I let Ubuntu set up my partitions. When I got my SSD, I just wanted to install Ubuntu as fast as possible. =) I've done testing to see at which point it starts to use swap, and it took a lot of open applications to finally use it. I currently have 4 GB of memory. I might shrink swap to 512 MB or 1 GB at the most. Here's some info about my file system setup:
        sudo hdparm -I /dev/sda1 | grep "TRIM supported"
          Data Set Management TRIM supported (limit 16 blocks)
        sudo fdisk -l /dev/sda
          Device Boot      Start        End     Blocks  Id  System
          /dev/sda1   *     2048   242016255  121007104  83  Linux
          /dev/sda2      242018302  250068991    4025345   5  Extended
          /dev/sda5      242018304  250068991    4025344  82  Linux swap / Solaris
        ls /dev/mapper
          control  cryptswap1

    Read the article

  • What is a Coding Dojo?

    - by huwyss
    Recently I found out that there is a thing called a "coding dojo". The point behind it is that software developers want a space to learn new stuff like processes, methods, coding details, languages, and whatnot in an environment without stress. Just for fun. No competition. No results required. No deadlines. Some days ago I joined the Zurich coding dojo. We were three programmers with different backgrounds. We gave ourselves the task of developing a method that takes an input value and returns its prime factors. We did pair programming and every few minutes we switched positions. We used test-driven development. The chosen programming language was Ruby. I hadn't really done TDD before. It was pretty interesting to see the algorithm develop following the test cases. We started with the first test, input=1, then developed the simplest productive program that passed this very first test. Then we added the next test, input=2, and implemented the productive code. We kept adding tests and made sure all tests passed until we had the general solution. When we improved the performance of our code we saw the value of the tests we wrote before. Of course our first performance improvement broke several tests. It was a very interesting experience to see how other developers think and how they work. I will participate in the dojo again and can warmly recommend it to anyone. There are coding dojos all over the world. Have fun!
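    As an aside, the exercise described above is small enough to sketch in a few lines. Here is a minimal prime-factorisation method in Java (the dojo used Ruby; this sketch and its names are mine, not the dojo's code):

        import java.util.ArrayList;
        import java.util.List;

        public class PrimeFactors {
            // Returns the prime factors of n in ascending order; empty for n = 1.
            public static List<Integer> of(int n) {
                List<Integer> factors = new ArrayList<>();
                for (int candidate = 2; n > 1; candidate++) {
                    while (n % candidate == 0) {
                        factors.add(candidate);   // candidate is prime here by construction
                        n /= candidate;
                    }
                }
                return factors;
            }

            public static void main(String[] args) {
                System.out.println(of(1));   // []
                System.out.println(of(2));   // [2]
                System.out.println(of(12));  // [2, 2, 3]
            }
        }

    Growing exactly this shape one test at a time (input=1, then 2, then 3, 4, ...) is the TDD progression the dojo followed.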

    Read the article

  • Would form keys reduce the amount of spam we receive?

    - by David Wilkins
    I work for a company that has an online store, and we constantly have to deal with a lot of spam product reviews, and bogus customer accounts. These are all created by automated systems and are more of a nuisance than anything. What I am thinking of (in lieu of captcha, which can be broken) is adding a sort of form key solution to all relevant forms. I know for certain some of the spammers are using XRumer, and I know they seldom request a page before sending us the form data (Is this the definition of CSRF?) so I would think that tying a key to each requested form would at least stem the tide. I also know the spammers are lazy and don't check their work, or they would see that we have never posted a spam review, and they have never gained any revenue from our site. Would this succeed in significantly reducing the volume of spam product reviews and customer account creations we are seeing? EDIT: To clarify what I mean by "Form Keys": I am referring to creating a unique identifier (or "key") that will be used as an invisible, static form field. This key will also be stored either in the database (relative to the user session) or in a cookie variable. When the form's target gets a request, the key must be validated for the form's data to be processed. Those pesky bots won't have the key because they don't load the javascript that generates the form (they just send a blind request to the target) and even if they did load the javascript once, they'd only have one valid key, and I'm not sure they even use cookies.
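    For what it's worth, the "form key" idea described above can be sketched in a few lines of server-side code. This is a minimal illustration only (the class and method names are hypothetical, not from any existing framework); in practice the issued keys would live in the user's session or a database, as the question suggests:

        import java.security.SecureRandom;
        import java.util.Base64;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Issues one-time form keys and validates them when the form is posted back.
        public class FormKeys {
            private static final SecureRandom RANDOM = new SecureRandom();
            // Stand-in for a session- or database-backed store of issued keys.
            private final Map<String, Boolean> issuedKeys = new ConcurrentHashMap<>();

            // Called when the page containing the form is rendered.
            public String issueKey() {
                byte[] bytes = new byte[16];
                RANDOM.nextBytes(bytes);
                String key = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
                issuedKeys.put(key, Boolean.TRUE);
                return key; // embed as a hidden form field
            }

            // Called by the form's target; true only for a key we issued, and only once.
            public boolean consumeKey(String key) {
                return key != null && issuedKeys.remove(key) != null;
            }
        }

    A bot that posts straight to the form's target without first requesting (and rendering) the page never receives a valid key, which is exactly the behaviour the question wants to exploit.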

    Read the article

  • game play strategy in an arena

    - by joulesm
    I am writing a player's behavior for an arena game, and I'm wondering if you can offer some strategies. I'm writing it in Python, but I'm just interested in the high level game play. Here are the game aspects: Arena is a circle of a given size. The arena size shrinks every round to help break ties. Players are much smaller circles, can be on teams of 1 or 2 players. Players attack by colliding with other players, and based on the physics of the collision (speed of both players, angle), one could force another player out of the arena. Once a player is out of the arena, they are out of the game (for that round). The goal is to be the only team with players left in the arena. All other players have been pushed (through collisions or mistakes) out of the arena. It is possible for there to be no winner if the last two players exit the arena at the same time. Once the player has been programmed, the game just runs. There is no human intervention in the game. I'm thinking it's easiest to implement a few simple programmatic rules for my player to follow. For example, stay close to center of the arena, attack opponents from the inner side of the arena, etc. Are there any good simple game strategies? Would adding a random aspect to the game help? For example, to avoid predictability by the other team or something. Thanks in advance.
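    As a concrete illustration of the "simple programmatic rules" idea above, here is a rough sketch of a centre-seeking rule (the question uses Python; this is written in Java only to match the other sketches on this page, and the coordinate conventions are assumptions, not part of the game's API):

        // Steering rule: head back toward the arena centre once we drift past a
        // safety radius; otherwise move toward the nearest opponent to attack.
        final class CentreSeekingBot {
            static final double SAFE_FRACTION = 0.5; // stay inside half the arena radius

            // Arena centre assumed at (0, 0); returns a unit direction vector {dx, dy}.
            static double[] chooseDirection(double x, double y, double arenaRadius,
                                            double enemyX, double enemyY) {
                double targetX, targetY;
                if (Math.hypot(x, y) > SAFE_FRACTION * arenaRadius) {
                    targetX = -x;            // retreat toward the centre
                    targetY = -y;
                } else {
                    targetX = enemyX - x;    // line up an attack from the inner side
                    targetY = enemyY - y;
                }
                double len = Math.hypot(targetX, targetY);
                return len == 0 ? new double[] {0, 0}
                                : new double[] {targetX / len, targetY / len};
            }
        }

    Randomising the safety radius a little each round is one cheap way to add the unpredictability the question asks about.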

    Read the article

  • Save Actions in NetBeans IDE 7.3

    - by Geertjan
    Several developers, especially those familiar with equivalent functionality in Eclipse, have been asking for so-called "Save Actions", that is, support for actions that are automatically performed when a file is saved. Here's the related NetBeans issue: http://netbeans.org/bugzilla/show_bug.cgi?id=140719. In NetBeans IDE 7.3, the issue is resolved as follows: a new "On Save" tab is found in the "Editor" tab of the Options window. Defaults for all languages are set via the "All Languages" item in the drop-down. Here, for all languages, you can specify what kind (all, none, or only modified lines) of formatting and space removal will occur automatically when a file is saved. Via the drop-down, you see all the languages supported by the IDE. You can pick a language and then override the default On Save settings. Per language, there may be additional On Save settings. For example, for Java, you can specify that, when saving a Java file, unused import statements should be removed and/or the rules you've set for organizing import statements should be applied. There's also a set of new NetBeans IDE APIs for adding new On Save functionality via custom plugins. Via MIME type registration of OnSaveTask.Factory, you can register new On Save actions that will be run for files conforming to the relevant MIME type. There are also extension points via the Editor Options API for registering new panels (one per language) in the On Save panel of the Options window. I'll demonstrate some examples of the APIs in upcoming blog entries.

    Read the article

  • USB mouse pointer only moving horizontally on macbook 6.2 with 12.04

    - by Glyn Normington
    After installing Ubuntu 12.04 on a macbook pro 6.2, the touchpad and external USB mouse worked perfectly. After rebooting I can't get either the touchpad or the external USB mouse to work. Sometimes no mouse pointer is visible, but more often I can only move the mouse pointer horizontally across five sixths of the display (from the top left). I have uninstalled mouseemu. xinput list shows the USB mouse. xinput query-state for the USB mouse shows the following:
        ButtonClass
          button[1]=up ... button[16]=up
        ValuatorClass Mode=Relative Proximity=In
          valuator[0]=480
          valuator[1]=2400
          valuator[2]=0
          valuator[3]=3
    Re-issuing this command with the pointer at its right-hand extreme displays the same except for:
        valuator[0]=1679
    So valuator[0] seems to be the x-coordinate of the pointer, and the range of motion 480-1679 is indeed about five sixths of the display width (1440). valuator[1] is suspiciously large given the display height is 900. Perhaps this is a side effect of having previously been using a dual monitor (although booting with that monitor connected does not help). There are other entries listed under xinput list: Virtual core XTEST pointer, which seems stuck at position (840,1050), and bcm5974, which seems stuck at position (837,6700). Removing the bcm5974 module using rmmod disables the touchpad as expected but does not fix the USB mouse problem. After adding the module back, it is stuck at position (840,1050) instead of (837,6700). /etc/X11/xorg.conf was generated by nvidia-settings and contains:
        Section "InputDevice"
            # generated from default
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/psaux"
            Option      "ZAxisMapping" "4 5"
    although I don't know how plausible these settings are. Any suggestions?

    Read the article

  • Intermittent sound on an Medion Akoya S5610

    - by ej159
    The sound on my machine (Medion Akoya S5610) works intermittently. If I reboot enough times I do get sound. This happened before I upgraded, when running Oneiric too. I have fiddled around with alsa-base.conf, putting in different values for model in "options snd-hda-intel model=", but still the issue persists (although I get the impression that I am more likely to have sound on the next reboot if I have edited that file, although I can't be sure of this). Adding index=0 does not help the situation either. I have been thinking that this problem could somehow be related to the order in which driver modules are loaded. The snd-hda-intel module is also used for the sound card (ALC888) in my graphics card. Could it be that these are somehow competing? If so, how do I add a preference when they are using the same module? This is the result of lspci -nn | grep Audio (when sound was not working):
        00:1b.0 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
        01:00.1 Audio device [0403]: Advanced Micro Devices [AMD] nee ATI RV620 HDMI Audio [Radeon HD 3400 Series] [1002:aa28]
    I've been wrestling with this problem for ages and ages and have spent days looking for answers on forums, but to no avail, so I would appreciate any help you can give. Many thanks

    Read the article

  • Skin Object Tokens for DotNetNuke 5 - 8 Videos

    In this tutorial we demonstrate how to use Skin Object Tokens in DotNetNuke v5 and above. Skin Object Tokens are a new skinning method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object Token is a web user control; it covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use, etc. This new token method was introduced into DotNetNuke with the idea of making it simpler to add a skin object into a DotNetNuke skin. The videos contain:
        Video 1 - Introduction to HTML Object Token Skinning
        Video 2 - Basic Styling of a Skin and Creating Multiple Content Panes
        Video 3 - Styling, Control Panel, Login and Register Skin Object Tokens
        Video 4 - Packaging, Installing, Testing and Viewing the ASCX Version of the Skin
        Video 5 - Viewing the Attributes for Skin Object Tokens, Logo Token, Search Token
        Video 6 - Breadcrumb Token, Text Token and Localization, Links Token
        Video 7 - More Skin Tokens and Token Replacement
        Video 8 - Demonstration of the Object Tokens and Bug Fixing

    Read the article

  • Unit testing to prove balanced tree

    - by Darrel Hoffman
    I've just built a self-balancing tree (red-black) in Java (language should be irrelevant for this question though), and I'm trying to come up with a good means of testing that it's properly balanced. I've tested all the basic tree operations, but I can't think of a way to test that it is indeed well and truly balanced. I've tried inserting a large dictionary of words, both pre-sorted and un-sorted. With a balanced tree, those should take roughly the same amount of time, but an unbalanced tree would take significantly longer on the already-sorted list. But I don't know how to go about testing for that in any reasonable, reproducible way. (I've tried doing millisecond tests on these, but there's no noticeable difference - probably because my source data is too small.) Is there a better way to be sure that the tree is really balanced? Say, by looking at the tree after it's created and seeing how deep it goes? (That is, without modifying the tree itself by adding a depth field to each node, which is just wasteful if you don't need it for anything other than testing.)
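    One way to do the depth check mentioned above without adding a field to the nodes is to measure the height from outside, in the test itself. A rough sketch in Java, assuming hypothetical getLeft()/getRight() accessors on the node class (for a red-black tree with n keys, the height should not exceed 2*log2(n+1)):

        // Minimal read-only view of a tree node; adapt to the real node class.
        interface Node {
            Node getLeft();
            Node getRight();
        }

        final class BalanceChecks {
            // Height of the subtree rooted at node; 0 for an empty tree.
            static int height(Node node) {
                if (node == null) {
                    return 0;
                }
                return 1 + Math.max(height(node.getLeft()), height(node.getRight()));
            }

            // In a test: insert n keys, then assert the red-black height bound.
            static void assertBalanced(Node root, int n) {
                int maxAllowed = (int) Math.ceil(2 * (Math.log(n + 1) / Math.log(2)));
                if (height(root) > maxAllowed) {
                    throw new AssertionError("height " + height(root)
                            + " exceeds red-black bound " + maxAllowed);
                }
            }
        }

    Unlike the timing comparison, this check is deterministic and reproducible no matter how small the test data is.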

    Read the article

  • How do you track existing requirements over time?

    - by CaptainAwesomePants
    I'm a software engineer working on a complex, ongoing website. It has a lot of moving parts and a small team of UI designers and business folks adding new features and tweaking old ones. Over the last year or so, we've added hundreds of interesting little edge cases. Planning, implementing, and testing them is not a problem. The problem comes later, when we want to refactor or add another new feature. Nobody remembers half of the old features and edge cases from a year ago. When we want to add a new change, we notice that code does all sorts of things in there, and we're not entirely sure which things are intentional requirements and which are meaningless side effects. Did someone last year request that the login token was supposed to only be valid for 30 minutes, or did some programmers just pick a sensible default? Can we change it? Back when the product was first envisioned, we created some documentation describing how the site worked. Since then we created a few additional documents describing new features, but nobody ever goes back and updates those documents when new features are requested, so the only authoritative documentation is the code itself. But the code provides no justification, no reason for its actions: only the how, never the why. What do other long-running teams do to keep track of what the requirements were and why?

    Read the article

  • SharePoint: UI for ordering list items

    - by svdoever
    SharePoint list items have a field named Order in the base Item template. This field is not shown by default. SharePoint 2007, 2010 and 2013 provide a UI for specifying the order, using the _layouts page: {SiteUrl}/_layouts/Reorder.aspx?List={ListId}. In SharePoint 2010 and 2013 it is possible to add a custom action to a list, so you can add a custom action to order list items as follows (SharePoint 2010 description):
        1. Open SharePoint Designer 2010
        2. Navigate to a list
        3. Select Custom Actions > List Item Menu
        4. Fill in the dialog box
        5. Open List Settings > Advanced Settings > Content Types, and set Allow management of content types to No
        6. On List Settings select Column Ordering
    This results in a new custom action in the browser UI; selecting the custom Order Items action (under List Tools > Items) brings up the ordering page. You can change your custom action in SharePoint Designer; on the list screen, in the bottom right corner, you can find the custom action. We now need to modify the view to include the order, by adding the Order field to the view and sorting on the Order field. The problem is that the Order field is hidden. It is possible to modify the schema of the Order field to set the Hidden attribute to FALSE. If we don't want to write code to do it and still get the required result, it is also possible to modify the code of the view through SharePoint Designer. Note that if you change the view through the web UI these changes are partly lost; if this is a problem, modify the Order field schema for the list.

    Read the article

  • How to prevent one account from unlocking products on other devices using Apple StoreKit?

    - by reapz
    We are currently wrapping up a free-to-play game on iOS in which you can purchase non-consumable products. We have been discussing this case internally and are not quite sure what the best practices are, as this is our first title. For example, if a user downloads our app and makes some purchases, these can be restored should the app ever be deleted and reinstalled, as long as the user uses the same Apple ID. What is to stop him from making a fake Apple account, purchasing items and then posting this account on the web, allowing everyone to get the items for free? That is obviously a worst-case situation. But a smaller case would be a user unlocking items for his friends. We do not want this to be an always-online game, but have considered doing a check on startup if there is internet available. If the currently logged in account doesn't own the products, do we lock them again? Probably not, because people may simply sign into the device with different Game Center logins, at which point we don't want to constantly lock and unlock items. At some point we will be adding multiplayer, at which point we can definitely do a check with the currently logged in account. This is because A, they will be online when attempting multiplayer, and B, they will want to use their own account for multiplayer. Unfortunately we aren't quite ready for this yet. Has anyone tackled this issue? Are we overthinking here?

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that
        SELECT id, name, address
        FROM persons JOIN addresses ON persons.id = addresses.person_id;
    could be better written as / is better written than
        SELECT persons.id, persons.name, addresses.address
        FROM persons JOIN addresses ON persons.id = addresses.person_id;
    while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git you can revert to the first commit ever, and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of all results: same style in all commits, and minimal merge work? To clarify, this is not about best practices when starting the project, but rather what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history? Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

    So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors of generating classes to interface with an XML data source, which the previous posts were about, can now be presented quickly and easily to an end user.  I think.  We'll see.  We'll be using a WPF window to display all of our various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible.  Additionally, I would like to dig into the performance of this solution as well as its flexibility if we were to modify the underlying XML schema.  So first things first, let's create a WPF project and include our XML data in a data folder within.  On the main window, we'll drag out the following controls:
        - A combo box to contain all of the teams
        - A list box to show the players of the selected team, along with add/delete player buttons
        - A text box tied to the selected player's name, with a save button to save any changes made to the player name
        - A combo box of all the available positions, tied to the currently selected player's position
        - A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons
    This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project).  To get to the visual data binding, as we learned in a previous post, you have to first make sure the project containing your bindable classes is compiled.  Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project.  Why only Team and Position?  Well, we will get to Players from Teams, and Statistics from Players, so no need to make an interface for them, as we'll see in a second.  As for Positions, we'll need a way to bind the dropdown to ALL positions; they don't appear underneath any of the other classes, so we need to reference it directly.  After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics.  This is why there was no need to bring in a reference to those classes for the UI we are designing.
    Now for the seriously hard work of binding all of our controls to the correct data sources.  Drag the following items from the Data Sources pane to the specified control on the window design canvas:
        - Team.Name > Teams combo box
        - Team.Players.Name > Players list box
        - Team.Players.Name > Player name text box
        - Team.Players.Statistics > Statistics data grid
        - Position.Name > Positions combo box
    That is it!  Really?  Well, no, not really; there is one caveat here in that the Positions combo box is not bound to the selected player's position.  To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value.  That should do the trick; now all we need to worry about is loading the actual data.  Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions.  At least Visual Studio kindly created the stubs for us to do so; ultimately the code should look like this.  Note the weirdness with the InitializeDataFiles call; that is my current means of telling an IO where to load the data for each of the entities.  I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml besides for positions, which is loaded from Lookups.xml.
    I think that may be all we need to do to at least load all of the data, so let's run it and see: Yay!  All of our glorious data is being displayed!  Er, wait, what's up with the position dropdown?  Why is it red?  Let's select the RB and see if everything updates: Crap, the position didn't update to reflect the selected player, but everything else did.  Where did we go wrong in binding the position to the selected player?  Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player.  I don't see a similar property to that on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again: Hey, all right!  No red box around the positions combo box.  Unfortunately, selecting the RB does not update the dropdown to point to Runningback.  Hmmm.  Now what could it be?  Maybe the problem is that we are loading teams before we are loading positions, so when it binds the position Id, all of the positions aren't loaded yet.  I went to the code-behind and switched things so positions load first, and no dice.  Same result when I run.  Why?  WHY?  Ok, ok, calm down, take a deep breath.  Get something with caffeine or sugar (preferably both) and think rationally.
    Ok, a gigantic chocolate chip cookie and a Mountain Dew chaser have never let me down in the past, so don't fail me now!  Ah ha!  Of course!  I didn't even have to finish the Mountain Dew and I think I've got it: Data Context.  By default, when setting the selected value binding for the dropdown, the data context was list_team.  I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead.  Running it now and selecting the various players: Done and done.  Everything read and bound; thank you caffeine and sugar!  Oh, and thank you Visual Studio 2010.
    Let's wire up some of those buttons now.  There has got to be a better way to do this, but it works for now.  What the add player button does is add a new player object to the currently selected team.  Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection; gotta look into this), so I just save the change immediately and reload the screen.  Terrible, but it works.  Let's go after something easier: the save button.  By default, as we type in new text for the player's name, it is showing up in the list box as updated.  Cool!  Why couldn't my add new player logic do that?  Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:
        MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);
    Surprisingly, that worked on the first try.  Let's see if we get as lucky with the Delete player button:
        MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
        Refresh();
    Note the use of the Refresh method again; I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not.  That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection).  Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.
    I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons worth of stats.  If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file.  The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte.  Not huge, but big enough to see some read performance numbers... or so I thought.  It reads this file and renders the first team in under a second.  That is unbelievable, but we are lazy loading and the file really wasn't that big.  I will increase it to 50 teams with 100 players and 20 seasons each: 100,000 rows.  It took a year and a half to save all of that data, and resulted in an 8 megabyte file.  Seriously, if you are loading XML files this large, get a freaking database!  Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought that it was loading that entire 8 MB XML file behind the scenes.  I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no efforts to optimize any of this code and was fairly new to the concept from the start.  There might be some merit to this little project after all.  Look out SQL Server and Oracle, use XML files instead!  Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model.  How well will the code be regenerated?  How much effort will be required to tie things back together in the UI?

    Read the article

  • How to install Gyachi on ubuntu 12.10 [Solved]

    - by Oguz Can Sertel
    ok ... there is no way to install it on ubuntu 12.10. I would like to use Gyachi on Ubuntu 12.10. I tried these steps but it doesn't work. I wanted to compile it myself, but it needs some libs and that made me confused, so I gave up.
        sudo add-apt-repository ppa:adilson/experimental
        sudo apt-get update
        sudo apt-get install gyachi
    Thank you for your help. The output of the first command (sudo add-apt-repository ppa:adilson/experimental) is:
        You are about to add the following PPA to your system:
        Contains packages that are not in the official Debian/Ubuntu repositories and newer versions and snapshots which are not available yet in the repositories. Theses packages are experimental. Use them at your own risk.
        More info: https://launchpad.net/~adilson/+archive/experimental
        Press [ENTER] to continue or ctrl-c to cancel adding it
        gpg: keyring `/tmp/tmp3y3i7p/secring.gpg' created
        gpg: keyring `/tmp/tmp3y3i7p/pubring.gpg' created
        gpg: requesting key 27B81625 from hkp server keyserver.ubuntu.com
        gpg: /tmp/tmp3y3i7p/trustdb.gpg: trustdb created
        gpg: key 27B81625: public key "Launchpad Experimental Packages PPA" imported
        gpg: Total number processed: 1
        gpg:               imported: 1  (RSA: 1)
        OK
    And after sudo apt-get update, this is the output of sudo apt-get install gyachi:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package gyachi

    Read the article

  • how to solve ArrayList IndexOutOfBoundsException?

    - by iQue
    I'm getting:
        09-02 17:15:39.140: E/AndroidRuntime(533): java.lang.IndexOutOfBoundsException: Invalid index 1, size is 1
        09-02 17:15:39.140: E/AndroidRuntime(533): at java.util.ArrayList.throwIndexOutOfBoundsException(ArrayList.java:251)
    when I'm killing enemies using this method:
        private void checkCollision() {
            Rect h1 = happy.getBounds();
            for (int i = 0; i < enemies.size(); i++) {
                for (int j = 0; j < bullets.size(); j++) {
                    Rect b1 = bullets.get(j).getBounds();
                    Rect e1 = enemies.get(i).getBounds();
                    if (b1.intersect(e1)) {
                        enemies.get(i).damageHP(5);
                        bullets.remove(j);
                        Log.d("TAG", "HERE: LOLTHEYTOUCHED");
                    }
                    if (h1.intersect(e1)) {
                        happy.damageHP(5);
                    }
                    if (enemies.get(i).getHP() <= 0) {
                        enemies.remove(i);
                    }
                    if (happy.getHP() <= 0) {
                        //end-screen !!!!!!!
                    }
                }
            }
        }
    using this ArrayList:
        private ArrayList<Enemy> enemies = new ArrayList<Enemy>();
    and adding to the array like this:
        public void createEnemies() {
            Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.female);
            if (enemyCounter < 24) {
                enemies.add(new Enemy(bmp, this, controls));
            }
            enemyCounter++;
        }
    I don't really understand what the problem is. I've been looking around for a while but can't really find anything that helps me. If you know, or if you can link me someplace where they have a solution for a similar problem, I'll be a very happy camper! Thanks for your time.
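    The usual cause of this exception in a loop like the one above is removing elements from a list while still iterating over its old indices: once enemies.remove(i) runs inside the inner bullet loop, the next enemies.get(i) can point past the end of the shrunken list (and bullets.remove(j) has the same problem). One possible restructuring is sketched below; it is only an illustration, reusing the names from the question (Enemy, Bullet, Rect, happy and so on are assumed to exist as in the original code):

        // (requires: import java.util.Iterator; at the top of the file)
        private void checkCollision() {
            Rect h1 = happy.getBounds();
            // Iterators allow safe removal while looping.
            for (Iterator<Enemy> enemyIt = enemies.iterator(); enemyIt.hasNext();) {
                Enemy enemy = enemyIt.next();
                Rect e1 = enemy.getBounds();
                for (Iterator<Bullet> bulletIt = bullets.iterator(); bulletIt.hasNext();) {
                    Bullet bullet = bulletIt.next();
                    if (bullet.getBounds().intersect(e1)) {
                        enemy.damageHP(5);
                        bulletIt.remove();   // safe: removes through the iterator
                    }
                }
                if (h1.intersect(e1)) {
                    happy.damageHP(5);
                }
                if (enemy.getHP() <= 0) {
                    enemyIt.remove();        // never touch enemies.get(i) after this
                }
            }
        }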

    Read the article

  • Using visitor pattern with large object hierarchy

    - by T. Fabre
    Context: I've been using, with a hierarchy of objects (an expression tree), a "pseudo" visitor pattern (pseudo, as in it does not use double dispatch):
        public interface MyInterface {
            void Accept(SomeClass operationClass);
        }

        public class MyImpl : MyInterface {
            public void Accept(SomeClass operationClass) {
                operationClass.DoSomething();
                operationClass.DoSomethingElse();
                // ... and so on ...
            }
        }
    This design, however questionable, was pretty comfortable since the number of implementations of MyInterface is significant (~50 or more) and I didn't need to add extra operations. Each implementation is unique (it's a different expression or operator), and some are composites (i.e., operator nodes that contain other operator/leaf nodes). Traversal is currently performed by calling the Accept operation on the root node of the tree, which in turn calls Accept on each of its child nodes, which in turn... and so on. But the time has come when I need to add a new operation, such as pretty printing:
        public class MyImpl : MyInterface {
            // Property does not come from MyInterface
            public string SomeProperty { get; set; }

            public void Accept(SomeClass operationClass) {
                operationClass.DoSomething();
                operationClass.DoSomethingElse();
                // ... and so on ...
            }

            public void Accept(SomePrettyPrinter printer) {
                printer.PrettyPrint(this.SomeProperty);
            }
        }
    I basically see two options:
        - Keep the same design, adding a new method for my operation to each derived class, at the expense of maintainability (not an option, IMHO).
        - Use the "true" Visitor pattern, at the expense of extensibility (not an option, as I expect to have more implementations coming along the way...), with about 50+ overloads of the Visit method, each one matching a specific implementation.
    Question: Would you recommend using the Visitor pattern? Is there any other pattern that could help solve this issue?
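    For reference, the "true" Visitor shape being weighed in the second option looks roughly like the sketch below. It is written in Java syntax (the question's snippets are C#) and the type names are illustrative only; the essential point is that each concrete node forwards to the visitor overload for its own type, so a new operation is a new visitor class rather than a new method on every node:

        // Each concrete node accepts a visitor and dispatches to its own overload.
        interface Expression {
            <R> R accept(Visitor<R> visitor);
        }

        interface Visitor<R> {
            R visitConstant(Constant node);
            R visitAddition(Addition node);
            // ... one overload per concrete node type ...
        }

        final class Constant implements Expression {
            final int value;
            Constant(int value) { this.value = value; }
            public <R> R accept(Visitor<R> visitor) { return visitor.visitConstant(this); }
        }

        final class Addition implements Expression {
            final Expression left, right;
            Addition(Expression left, Expression right) { this.left = left; this.right = right; }
            public <R> R accept(Visitor<R> visitor) { return visitor.visitAddition(this); }
        }

        // Adding pretty printing means adding one visitor, not touching ~50 node classes.
        final class PrettyPrinter implements Visitor<String> {
            public String visitConstant(Constant node) { return Integer.toString(node.value); }
            public String visitAddition(Addition node) {
                return "(" + node.left.accept(this) + " + " + node.right.accept(this) + ")";
            }
        }

    The trade-off the question describes remains: every new node type still forces a new overload onto the Visitor interface and all of its implementations.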

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested that I use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see:
        - to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
        - C++ type safety (when using modern C++) makes everything easier to get "correct";
        - it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code;
        - potential gain in execution speed (but I don't think it will really be perceptible).
    However, I think there might be some drawbacks, and I'm not sure of the real impact as I haven't tried yet:
        - it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long to write) (compilation time shouldn't be a problem for this case);
        - I must assume that all the text files I'll read as input are in UTF-8; I'm not sure that can be easily checked at runtime in C++, and the language will not check it for you;
        - libraries in C++ are harder to manage than in scripting languages;
        - I lack experience and foresight, so maybe I'm missing advantages and drawbacks.
    So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

    Read the article

  • Setting up group disk quotas

    - by Ray
    I am hoping to get some advice on setting up disk quotas. So, I know about:
        - adding usrquota and grpquota to /etc/fstab for the file systems that need to be managed;
        - using edquota to assign disk quotas to users.
    However, I need to do the last step for multiple users, and edquota seems to be a bit troublesome. One solution that I have found is that I can do: sudo edquota -u foo -p bar. This will copy the disk quota of bar to user foo. I was wondering if this is the best solution? I tried setting up group disk quotas but they don't seem to be working. Are group quotas meant to help in the assignment of the same quota to multiple users? Or are they supposed to give a total limit to a set of users? For example, if users A, B and C are in group X, then does assigning a quota of 20 GB give each user 20 GB, or does it give 20 GB to the entire group X to divide up? I'm interested in doing the former, but not the latter. Right now, I've assigned group disk quotas and they aren't working. So, I guess it is due to my misunderstanding of group disk quotas... My problem is I want to easily give the same quota to multiple users; any suggestions on the best way to do this, out of what I've tried above or anything else I may not have thought of? Thank you!

    Read the article

  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning: The original code might not work properly in the first place. And writing tests for it makes future fixes and modifications harder, since devs have to understand and modify the tests too. If it's GUI code with some logic (~12 lines, 2-3 if/else blocks, for example), a test isn't worth the trouble since the code is too trivial to begin with. Similar bad patterns could exist in other parts of the codebase, too (which I haven't seen yet; I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility. Should I avoid extracting out testable parts and writing tests if we don't have time for a complete refactoring? Is there any disadvantage to this that I should consider?

    Read the article

  • TraceTune supports uploading Zip files

    - by Bill Graziano
    I’ve been using the online version of ClearTrace more and more lately.  When I get to a new client it’s just much easier to upload a trace file rather than install ClearTrace. That means I’ve finally been adding more features to it.  The two latest features are around ease of use. You can now upload a ZIP file that contains a trace file.  Trace files are already somewhat compressed.  Putting them in a ZIP file further compresses them by a factor of 8X or 9X in my testing. That means you can start with a 100MB trace and end up with a 10MB-12MB ZIP file to upload.  I’m consistently able to get over 150,000 events in a 100MB ZIP file.  That gives me a pretty good look at a system. The second part of this is that files are now processed asynchronously.  After you upload a file you’ll be taken to a processing page that updates every few seconds with the number of rows processed.  It generally takes under a minute to process a 100MB trace file but I *hated* staring at a blank screen. Give TraceTune a try.  It’s getting easier to use every day.

    Read the article

  • Errors in ~/.xsession-errors

    - by Kuberan Naganathan
    I'm getting errors in ~/.xsession-errors. I'm running Ubuntu 12.04. Many apps fail to run without mention of problems in the .xsession-errors file. I looked around and tried to resolve the issues myself but have failed so far. I have to say it's possible that the issue is related to me mounting /home on another partition. (I say possibly because stuff worked OK for a while.) Fortunately my .xsession-errors file is small enough to post here. Thanks in advance for the help:
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        Backend : gconf
        Integration : true
        Profile : unity
        Adding plugins
        Initializing core options...done
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to get edid: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: unable to get EDID for xrandr-default: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to reset xrandr-default gamma tables: gamma size is zero
        Initializing composite options...done
        Initializing opengl options...done
        Initializing decor options...done
        ** Message: applet now removed from the notification area
        Initializing vpswitch options...done
        Initializing snap options...done
        Initializing mousepoll options...done
        Initializing resize options...done
        Initializing place options...done
        Initializing move options...done
        Initializing wall options...done
        Initializing grid options...done
        I/O warning : failed to load external entity "/home/kuberan/.compiz/session/10754cf696d335e98e13471376531156900000024960034"
        Initializing session options...done
        Initializing gnomecompat options...done
        Initializing animation options...done
        Initializing fade options...done
        Initializing unitymtgrabhandles options...done
        Initializing workarounds options...done
        Initializing scale options...done
        compiz (expo) - Warn: failed to bind image to texture
        Initializing expo options...done
        Initializing ezoom options...done
        ** Message: using fallback from indicator to GtkStatusIcon
        (compiz:2560): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        Initializing unityshell options...done
        Setting Update "main_menu_key"
        Setting Update "run_key"
        Setting Update "icon_size"
        ** Message: moving back from GtkStatusIcon to indicator

    Read the article

  • Expanding RAID-5

    - by Garry
    I'm new to RAID and trying to get my head around things. I have owned a Drobo in the past (which I liked) but it failed. Here's a hypothetical scenario: assume I set up a RAID-5 array consisting of four 1TB hot-swappable 2.5" SATA drives. I name this volume 'My Data'. By my calculations, that would give me 2.7TB of usable space and the ability to recover if a single drive fails. I have a few questions:
        1. What happens if I pull out a single 1TB drive and replace it with a 2TB drive? Would the array automatically rebuild itself with no issues? Would the maximum capacity remain 2.7TB?
        2. If number (1) above is true and the array rebuilds itself with three 1TB drives and one 2TB drive, what would happen if I then pulled another 1TB drive out and stuck in a 2TB drive (you can see where I'm going here, can't you)? Would I eventually be able to gain more storage by gradually adding bigger drives?
        3. From a practical point of view, how much input is required from me as the end user whilst these drives are being pulled out and put in? On the Drobo, the storage space just automagically handles itself. Would I have to be actively involved in telling Ubuntu what was going on, or would any of it be automated?
    Thanks in advance,
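    As a side note, the 2.7TB figure quoted above matches the usual RAID-5 capacity rule: usable space is (n - 1) drives' worth, and decimal terabytes shrink once reported in binary units. This working is added here for illustration and is not part of the original question:

        usable = (4 - 1) × 1 TB = 3 × 10^12 bytes ≈ 3 × 10^12 / 2^40 ≈ 2.73 TiB

    The same rule is what governs question 2: a RAID-5 member only contributes the capacity of the smallest drive in the set, so mixed 1TB/2TB members keep contributing 1TB each until every member has been replaced and the array is grown.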

    Read the article

  • Google Analytics Social Tracking implementation. Is Google's example correct?

    - by s_a
    The current Google Analytics help page on Social tracking (developers.google.com/analytics/devguides/collection/gajs/gaTrackingSocial?hl=es-419) links to this page with an example of the implementation: http://analytics-api-samples.googlecode.com/svn/trunk/src/tracking/javascript/v5/social/facebook_js_async.html. I've followed the example carefully, yet social interactions are not registered. This is the webpage with the non-working setup: http://bit.ly/1dA00dY (obscured domain as per Google's Webmaster Central recommendations for their product forums). This is the structure of the page. In the head: the ga async code copied from the Analytics page; a script tag linking to the tracking script (ga_social_tracking.js) stored in the same domain; the Twitter JS loading tag. In the body: the fb-root div; the Facebook async loading JS, including the _ga.trackFacebook(); call; the social buttons afterwards (the Like button with the proper URL and the Tweet button with the proper handle). That's it. As far as I can tell, I have implemented it exactly like in the example, but likes and tweets aren't registered. I have also altered ga_social_tracking.js to register the social interactions as events, adding the code below. It doesn't work either. What could be wrong? Thanks! Code added to ga_social_tracking.js:
        var url = document.URL;
        var category = 'Social Media';

        /* Facebook */
        FB.Event.subscribe('edge.create', function(href, widget) {
            _gaq.push(['_trackEvent', category, 'Facebook', url]);
        });

        /* Twitter */
        twttr.events.bind('tweet', function(event) {
            _gaq.push(['_trackEvent', category, 'Twitter', url]);
        });

    Read the article

  • PPT Leveraging Azure for Performance Testing

    - by Tarun Arora
    I have recently presented a session on “How you can leverage Azure for Performance Testing” your application.  It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barrier in Load Testing & Performance Testing adoption is the high infrastructure and administration cost that comes with this phase of testing. In the session I tried to demonstrate how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. You can view the session presentation here, http://www.slideshare.net/aroratarun/leveraging-azure-for-performance-testing  I’ll be adding a video on this subject shortly… If you have any feedback or further suggestions to add to the goodness of this solution please get in touch.

    Read the article
