Search Results

Search found 5564 results on 223 pages for 'rollover effect'.

Page 112/223 | < Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >

  • In a Tower defense game, how to do buffs/debuffs

    - by Gabe
    The question is at the very bottom. If you understand buffs/debuffs in tower defense games then you should skip the bulk of this question and go to the bottom (separated with the long line). I plan on making an iPhone TD game. The fact that it's an iPhone game isn't relevant, but I am coding in Objective-C with Cocos2D. I am relatively inexperienced in the field of game design, so I'm looking for some advice from someone experienced in this field. In tower defense, there are two things that are relevant to my question: towers and enemies (both have their own classes/children). They each have stats like hp, damage, speed, etc. I want to add buffs/debuffs, for instance: Towers A, B and C each have 15 base damage. Tower D would be a buff tower with no damage, a tower with an AOE (area of effect) aura that gives 10% damage to all towers in range. Tower E might slow enemies in its AOE, a debuff. Stuff like that. The same could go for enemies. Enemy A is a boss that has a slow aura that affects towers and slows their base attack speed, or something along those lines. So the question is, what would be the most effective way to implement this? If it was just towers then I would just mess around with the tower classes, but since tower classes and enemy classes are both affected, should I make a buff class? TD games can consume quite a bit of memory with large numbers of creeps and towers, and I feel like buffs would also consume quite a bit... so I'm trying to be as efficient as possible.
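    One common pattern, sketched below in C# purely for illustration (the question itself is Objective-C/Cocos2D, and every type and member name here is invented), is to give towers and enemies a shared "buffable" interface and model each buff or debuff as a small object that can be applied to and removed from any such target. An aura tower then owns a single buff instance and applies it to whatever enters its range, so the per-creep memory cost stays small.

        // Hypothetical sketch only -- not from the question; names are illustrative.
        public interface IBuffable
        {
            float BaseDamage { get; }
            float BaseSpeed { get; }
            float DamageMultiplier { get; set; }   // buffs accumulate into these
            float SpeedMultiplier { get; set; }
        }

        public abstract class Buff
        {
            public abstract void Apply(IBuffable target);
            public abstract void Remove(IBuffable target);
        }

        public sealed class DamageAura : Buff
        {
            private readonly float _bonus;         // e.g. 0.10f for +10% damage

            public DamageAura(float bonus) { _bonus = bonus; }

            public override void Apply(IBuffable target)  { target.DamageMultiplier += _bonus; }
            public override void Remove(IBuffable target) { target.DamageMultiplier -= _bonus; }
        }

    Because both the tower and enemy classes would implement the same interface, a boss slow aura is just another Buff subclass that adjusts SpeedMultiplier on towers in range, and effective damage is computed as BaseDamage * DamageMultiplier at attack time.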

    Read the article

  • RPi and Java Embedded GPIO: Using Java to read input

    - by hinkmond
    Now that we've learned about using Java code to control the output of the Raspberry Pi GPIO ports (by lighting up LEDs from a Java app on the RPi for now and noting in the future the same Java code can be used to drive industrial automation or medical equipment, etc.), let's move on to learn about reading input from the RPi GPIO using Java code. As before, we need to start out with the necessary hardware. For this exercise we will connect a Static Electricity Detector to the RPi GPIO port and read the value of that sensor using Java code. The circuit we'll use is from William J. Beaty and is described at this Web link. See: Static Electricity Detector He calls it an "Electric Charge" detector, which is a bit misleading. A Field Effect Transistor is subject to nearby electro-magnetic fields, such as a static charge on a nearby object, not really an electric charge. So, this sensor will detect static electricity (or ghosts if you are into paranormal activity ). Take a look at the circuit and in the next blog posts we'll step through how to connect it to the GPIO port of your RPi and then how to write Java code to access this fun sensor. Hinkmond

    Read the article

  • If You Include the Groovy Editor...

    - by Geertjan
    ...in a NetBeans RCP application, what additional JARs will you need to include for the Groovy Editor to work? Leaving aside the debate on the current state and quality of the NetBeans Groovy Editor, assuming you need the Groovy support that the NetBeans Groovy Editor provides, you would check the Groovy Editor checkbox in the Project Properties dialog of your application: As you can see, however, the Groovy Editor depends on other modules, some of which, in turn, depend on yet other modules, and so on. So, I clicked the "Resolve" button above and then created a ZIP distribution, to see which additional JARs had been included. Until that point, I had only been using the "platform" cluster, which means that absolutely everything found in the ZIP's "ide" cluster and "java" cluster has only been included so that the Groovy Editor could be included, i.e., all thanks to clicking the "Resolve" button above. Let's first look at what that means for the "java" cluster: That's not so bad, and it is kind of a side effect of Groovy being Java, i.e., a lot of Java functionality is needed. Now let's look at the "ide" cluster: So, in answer to the original question, if all you want in your NetBeans Platform application, in terms of editor functionality, is the Groovy Editor, then you have a pretty high price to pay. At the very least, I would have assumed that the project support JARs and the debugger support JARs would not be so tightly coupled with the Groovy Editor. That would be a cool thing to separate out from the editor support.

    Read the article

  • isometric drawing order with larger than single tile images - drawing order algorithm?

    - by Roger Smith
    I have an isometric map over which I place various images. Most images will fit over a single tile, but some images are slightly larger. For example, I have a bed of size 2x3 tiles. This creates a problem when drawing my objects to the screen, as I get some tiles erroneously overlapping other tiles. The two solutions that I know of are either splitting the image into 1x1 tile segments or implementing my own draw order algorithm, for example by assigning each image a number. The image with number 1 is drawn first, then 2, 3, etc. Does anyone have advice on what I should do? It seems to me like splitting an isometric image is far from obvious. How do you decide which parts of the image are 'in' a particular tile? I can't afford to split up all of my images manually either. The draw order algorithm seems like a nicer choice, but I am not sure if it's going to be easy to implement. I can't work out, in my head, how to deal with situations where changing the index of one image causes a knock-on effect on many other images. If anyone has any resources/tutorials on this I would be most grateful.
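    One widely used alternative to hand-assigned draw indices is to recompute a depth key every frame from each object's footprint on the tile grid and sort by it before drawing, so there is never a chain of manual renumbering. Below is a minimal sketch in C# (all names are invented; the convention that lower x+y is "further back" is an assumption about how the map is projected, and interlocking footprints may still force you to split a sprite):

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch: depth-sort isometric renderables by the deepest tile they cover.
        public sealed class Renderable
        {
            public int TileX;       // top-left tile of the object's footprint
            public int TileY;
            public int TilesWide;   // a 2x3 bed would be TilesWide = 2, TilesTall = 3
            public int TilesTall;

            // Larger key = closer to the viewer, so it is drawn later.
            public int DepthKey
            {
                get { return (TileX + TilesWide - 1) + (TileY + TilesTall - 1); }
            }
        }

        public static class IsoDrawOrder
        {
            public static void Draw(List<Renderable> objects, Action<Renderable> drawOne)
            {
                objects.Sort((a, b) => a.DepthKey.CompareTo(b.DepthKey));  // back-to-front
                foreach (var obj in objects)
                    drawOne(obj);
            }
        }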

    Read the article

  • Keyboard freezes / stuck if a key pressed repeatedly

    - by Aziz Rahmad
    I use an Acer 4530. This problem has happened ever since Ubuntu 10.10, and continues now that I use 11.04 dual-booted with Linux Mint 10. Every time I press one key repeatedly, like when I read a long article on a website/ebook or play games which require me to press arrow keys repeatedly, the keyboard randomly freezes. That is, whatever I press on the keyboard has no effect, and the same happens with the touchpad. However, the USB mouse works just fine. I later found out that it's not actually frozen; it's more like the key is stuck. For example, when I play Tetris, where I usually press the w (down) key repeatedly, after some time it freezes. And if I put the cursor in, say, the browser's address bar, it types "wwww....." indefinitely. The only way I can fix it is by suspending the laptop, either by using the mouse or by closing the lid. Instead of actually suspending, in that case the laptop automatically wakes up and everything is fine again. (Usually my laptop wakes up after suspend when I press any key.) It has happened since the first time I used Ubuntu, 10.10, it also happens in Linux Mint 10, and it still happens now in Ubuntu 11.04. It never happens when I use Windows, though. Has anyone ever encountered a similar problem? Does anyone know how to fix it permanently?

    Read the article

  • how do I remove the last connected users from the lightdm greeter list

    - by Christophe Drevet
    With gdm3, I was able to remove the last connected users from the list by removing the file '/var/log/ConsoleKit/history'. With lightdm, the last users still appear even after: removing /var/log/ConsoleKit/history, removing /var/lib/lightdm/.cache/unity-greeter/state. Where does lightdm store this list? Edit: It seems like it's using the content from the 'last' command. Purging the content of the file /var/log/wtmp is then sufficient to remove any previously connected user from the list: # > /var/log/wtmp But after doing this, I have the unwanted side effect that users logging in via lightdm don't appear at all in this list. I must say that I'm in an enterprise network environment using NIS. Edit2: Well, it seems that lightdm uses wtmp to display the recent network users list, but does not update it. So lightdm will show a network user only if that user logged in in another fashion (ssh, login), as I did on this computer before. cf: https://bugs.launchpad.net/lightdm/+bug/871070 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=648604 Edit3: To force lightdm to store users in wtmp, I just added the following line to the file /etc/pam.d/lightdm: session optional pam_lastlog.so silent

    Read the article

  • How do I stop my ethernet network connection from dropping?

    - by Sean Hill
    My ethernet-based network connection doesn't stay up consistently. I'm running a ping against the gateway and it will: Work for a minute Freeze, time out, or give multi-second response times Repeat If it's stuck and I disable/enable networking through the network manager applet everything will work fine again for a minute. After 280 packets transmitted I'm getting 41% packet loss. I've tried a different cable and connection to the gateway but this had no effect. The distance to the gateway is just about 3 feet. Seems to work fine if I switch over to Windows, but Ubuntu is my main OS and I can't even use it right now as I depend on the network. My setup... OS: Ubuntu 11.04, dual-booting Windows 7 Mobo: Gigabyte Z68X-UD4-B3 CPU: Intel Core i7 2600K Edit A little clarification... Network Manager is still showing me as connected, but I am unable to reach to gateway or anything beyond. At no point does NM suggest the connection is lost and calling ifconfig shows that I still have an IP address. I tried connecting to a different gateway with a different cable and the same problem arises. As requested: lspci | grep -i eth 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) dmesg | tail -f [ 14.024709] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0 [ 14.026443] EXT4-fs (sda7): re-mounted. Opts: commit=0 [ 14.176101] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj. [ 23.917731] eth0: no IPv6 routers present [ 726.109697] r8169 0000:07:00.0: eth0: link up [ 733.169494] r8169 0000:07:00.0: eth0: link up [ 753.930119] r8169 0000:07:00.0: eth0: link up [ 880.787332] r8169 0000:07:00.0: eth0: link up [ 1159.161283] r8169 0000:07:00.0: eth0: link up [ 1406.623550] r8169 0000:07:00.0: eth0: link up Edit @roland-taylor: Network is always available under Windows. Pings do not timeout, applications do not complain of no network availability, large downloads are not interrupted or slowed.

    Read the article

  • How do I restore a backup of my keyring (containing ssh key passprases, nautilus remote filesystem passwords and wifi passwords)?

    - by con-f-use
    I changed the disk on my laptop and installed Ubuntu on the new disk. The old disk had 12.04 upgraded to 12.10 on it. Now I want to copy my old keyring with WiFi passwords, ftp passwords for nautilus and ssh key passphrases. I have all the data from the old disk available (it is now a USB disk and I did not delete the old data yet or do anything with it - I could still put it in the laptop and boot from it like nothing happened). The old methods of just copying ~/.gconf/... and ~/.gnome2/keyrings won't work. Did I miss something? 1. Edit: I figure one needs to copy files not located in the user's home directory as well. I copied the whole old /home/confus (which is my home directory) to the fresh install, to no effect. That whole copy is now reverted to the fresh install's home directory, so my /home/confus is as it was after the fresh install. 2. Edit: The folder /etc/NetworkManager/system-connections seems to be the place for WiFi passwords. It could be that /usr/share/keyrings is important as well for ssh keys - that's the only sensible thing that a search came up with: find /usr/ -name "*keyring*" 3. Edit: Still no ssh and ftp passwords from the keyring. What I did: Converted the old hard drive to a USB drive. Put the new drive in the laptop and installed a fresh version of 12.10 there. Booted from the old HDD via USB and copied its /etc/NetworkManager/system-connections, ~/.gconf/ and ~/.gnome2/keyrings, ~/.ssh over to the new disk. Confirmed that all keys on the old install work. Booted from the new disk. Result: No passphrases for ssh keys, no ftp passwords in the keyring. At least the WiFi passwords are migrated.

    Read the article

  • WP E Commerce Safe Mode restriction error [on hold]

    - by Mustafa Kamal
    My online shop, created with WP Ecommerce, broke after I moved it to another server. I can be sure that the problem comes from WP Ecommerce because when I disable that plugin, everything runs as expected. This is the exact error message: Warning: session_start() [function.session-start]: SAFE MODE Restriction in effect. The script whose uid is 515 is not allowed to access /tmp owned by uid 0 in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17 Fatal error: session_start() [<a href='function.session-start'>function.session-start</a>]: Failed to initialize storage module: files (path: ) in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17 I've tried to turn off safe mode in my PHP configuration: nothing happens, the error's still there. I thought it was some kind of permission issue, so I tried to change the /tmp permission to 777. Nothing happens. I googled some more and suspect it might have something to do with FastCGI configuration and the like, which I totally don't understand. My googling results mostly suggest consulting the web hosting provider or even moving to another host. But in this case I am the owner of the server (a VPS with cPanel/WHM), and I don't have any idea how to solve this kind of problem.

    Read the article

  • Dot Matrix printers setup...

    - by Parhs
    Hello! I am using Debian, which is similar to Ubuntu. We have 7 dot matrix printers, some very old like this one http://www.omnidatasys.net/product/desc_printer_ti880.htm, which has been working daily since 1979 and at text is faster than many inkjets. I believe it has its own language... Sending text to the serial port (port server) prints garbage. I think it prints only capital English up to ASCII 95 plus Greek, and the rest up to 127 are, I think, Greek capitals (a special chip). Sending English capital letters prints garbage, I think, but I am not sure... I will try again... The other printers are ESC/P compatible and I use the generic Epson driver provided by Ghostscript... However, I think that sending text via lp -dpr1 filename prints the text as a graphic... Changing the printer font face (Courier, Times Roman, etc.) or pitch has no effect... I am wondering if there is any workaround for this? In AIX they claim that the lp command printed output as text, and COBOL programs send raw text to the lp printers. However, in AIX they use some custom filters for the printers and have more options for dot matrix printers... I would like to know if there is a solution for this: to avoid graphics mode for text and to change the font face somehow. The most straightforward approach would be to use no driver and just send ESC/P from COBOL, but this requires too much work... Thank you again!

    Read the article

  • C#/.NET Little Wonders: Skip() and Take()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. I’ve covered many valuable methods from System.Linq class library before, so you already know it’s packed with extension-method goodness.  Today I’d like to cover two small families I’ve neglected to mention before: Skip() and Take().  While these methods seem so simple, they are an easy way to create sub-sequences for IEnumerable<T>, much the way GetRange() creates sub-lists for List<T>. Skip() and SkipWhile() The Skip() family of methods is used to ignore items in a sequence until either a certain number are passed, or until a certain condition becomes false.  This makes the methods great for starting a sequence at a point possibly other than the first item of the original sequence.   The Skip() family of methods contains the following methods (shown below in extension method syntax): Skip(int count) Ignores the specified number of items and returns a sequence starting at the item after the last skipped item (if any).  SkipWhile(Func<T, bool> predicate) Ignores items as long as the predicate returns true and returns a sequence starting with the first item to invalidate the predicate (if any).  SkipWhile(Func<T, int, bool> predicate) Same as above, but passes not only the item itself to the predicate, but also the index of the item.  For example: 1: var list = new[] { 3.14, 2.72, 42.0, 9.9, 13.0, 101.0 }; 2:  3: // sequence contains { 2.72, 42.0, 9.9, 13.0, 101.0 } 4: var afterSecond = list.Skip(1); 5: Console.WriteLine(string.Join(", ", afterSecond)); 6:  7: // sequence contains { 42.0, 9.9, 13.0, 101.0 } 8: var afterFirstDoubleDigit = list.SkipWhile(v => v < 10.0); 9: Console.WriteLine(string.Join(", ", afterFirstDoubleDigit)); Note that the SkipWhile() stops skipping at the first item that returns false and returns from there to the rest of the sequence, even if further items in that sequence also would satisfy the predicate (otherwise, you’d probably be using Where() instead, of course). If you do use the form of SkipWhile() which also passes an index into the predicate, then you should keep in mind that this is the index of the item in the sequence you are calling SkipWhile() from, not the index in the original collection.  That is, consider the following: 1: var list = new[] { 1.0, 1.1, 1.2, 2.2, 2.3, 2.4 }; 2:  3: // Get all items < 10, then 4: var whatAmI = list 5: .Skip(2) 6: .SkipWhile((i, x) => i > x); For this example the result above is 2.4, and not 1.2, 2.2, 2.3, 2.4 as some might expect.  The key is knowing what the index is that’s passed to the predicate in SkipWhile().  In the code above, because Skip(2) skips 1.0 and 1.1, the sequence passed to SkipWhile() begins at 1.2 and thus it considers the “index” of 1.2 to be 0 and not 2.  This same logic applies when using any of the extension methods that have an overload that allows you to pass an index into the delegate, such as SkipWhile(), TakeWhile(), Select(), Where(), etc.  It should also be noted, that it’s fine to Skip() more items than exist in the sequence (an empty sequence is the result), or even to Skip(0) which results in the full sequence.  So why would it ever be useful to return Skip(0) deliberately?  One reason might be to return a List<T> as an immutable sequence.  
Consider this class: 1: public class MyClass 2: { 3: private List<int> _myList = new List<int>(); 4:  5: // works on surface, but one can cast back to List<int> and mutate the original... 6: public IEnumerable<int> OneWay 7: { 8: get { return _myList; } 9: } 10:  11: // works, but still has Add() etc which throw at runtime if accidentally called 12: public ReadOnlyCollection<int> AnotherWay 13: { 14: get { return new ReadOnlyCollection<int>(_myList); } 15: } 16:  17: // immutable, can't be cast back to List<int>, doesn't have methods that throw at runtime 18: public IEnumerable<int> YetAnotherWay 19: { 20: get { return _myList.Skip(0); } 21: } 22: } This code snippet shows three (among many) ways to return an internal sequence in varying levels of immutability.  Obviously if you just try to return as IEnumerable<T> without doing anything more, there’s always the danger the caller could cast back to List<T> and mutate your internal structure.  You could also return a ReadOnlyCollection<T>, but this still has the mutating methods, they just throw at runtime when called instead of giving compiler errors.  Finally, you can return the internal list as a sequence using Skip(0) which skips no items and just runs an iterator through the list.  The result is an iterator, which cannot be cast back to List<T>.  Of course, there’s many ways to do this (including just cloning the list, etc.) but the point is it illustrates a potential use of using an explicit Skip(0). Take() and TakeWhile() The Take() and TakeWhile() methods can be though of as somewhat of the inverse of Skip() and SkipWhile().  That is, while Skip() ignores the first X items and returns the rest, Take() returns a sequence of the first X items and ignores the rest.  Since they are somewhat of an inverse of each other, it makes sense that their calling signatures are identical (beyond the method name obviously): Take(int count) Returns a sequence containing up to the specified number of items. Anything after the count is ignored. TakeWhile(Func<T, bool> predicate) Returns a sequence containing items as long as the predicate returns true.  Anything from the point the predicate returns false and beyond is ignored. TakeWhile(Func<T, int, bool> predicate) Same as above, but passes not only the item itself to the predicate, but also the index of the item. So, for example, we could do the following: 1: var list = new[] { 1.0, 1.1, 1.2, 2.2, 2.3, 2.4 }; 2:  3: // sequence contains 1.0 and 1.1 4: var firstTwo = list.Take(2); 5:  6: // sequence contains 1.0, 1.1, 1.2 7: var underTwo = list.TakeWhile(i => i < 2.0); The same considerations for SkipWhile() with index apply to TakeWhile() with index, of course.  Using Skip() and Take() for sub-sequences A few weeks back, I talked about The List<T> Range Methods and showed how they could be used to get a sub-list of a List<T>.  This works well if you’re dealing with List<T>, or don’t mind converting to List<T>.  But if you have a simple IEnumerable<T> sequence and want to get a sub-sequence, you can also use Skip() and Take() to much the same effect: 1: var list = new List<double> { 1.0, 1.1, 1.2, 2.2, 2.3, 2.4 }; 2:  3: // results in List<T> containing { 1.2, 2.2, 2.3 } 4: var subList = list.GetRange(2, 3); 5:  6: // results in sequence containing { 1.2, 2.2, 2.3 } 7: var subSequence = list.Skip(2).Take(3); I say “much the same effect” because there are some differences.  
First of all GetRange() will throw if the starting index or the count are greater than the number of items in the list, but Skip() and Take() do not.  Also GetRange() is a method off of List<T>, thus it can use direct indexing to get to the items much more efficiently, whereas Skip() and Take() operate on sequences and may actually have to walk through the items they skip to create the resulting sequence.  So each has their pros and cons.  My general rule of thumb is if I’m already working with a List<T> I’ll use GetRange(), but for any plain IEnumerable<T> sequence I’ll tend to prefer Skip() and Take() instead. Summary The Skip() and Take() families of LINQ extension methods are handy for producing sub-sequences from any IEnumerable<T> sequence.  Skip() will ignore the specified number of items and return the rest of the sequence, whereas Take() will return the specified number of items and ignore the rest of the sequence.  Similarly, the SkipWhile() and TakeWhile() methods can be used to skip or take items, respectively, until a given predicate returns false.    Technorati Tags: C#, CSharp, .NET, LINQ, IEnumerable<T>, Skip, Take, SkipWhile, TakeWhile

    Read the article

  • Silverlight Cream for May 25, 2010 -- #869

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov, Victor Gaudioso, Phil Middlemiss, Jonathan van de Veen, Lee, and Domagoj Pavlešic. From SilverlightCream.com: Book Folding effect using Pixel Shader On the new CompleteIT site, did you know the page-folding was done using PixelShaders? I hadn't put much thought into it, but that's pretty cool, and Miroslav Miroslavov has a blog post up discussing it, and the code behind it. New Silverlight Video Tutorial: How to create a Slider with a ToolTip that shows the Value of the Slider This is pretty cool... Victor Gaudioso's latest video tutorial shows how to put the slider position in the slider tooltip... code and video tutorial included. Backlighting a ListBox Put this in the cool category as well... Phil Middlemiss worked out a ListBox styling that makes the selected item be 'backlit' ... check out the screenshot on the post... and then grab the code :) Adventures while building a Silverlight Enterprise application part #33 Jonathan van de Veen is discussing changes to his project/team and how that has affected development. Read about what they did right and some of their struggles. RIA Services and Storedprocedures Lee's discussing Stored Procs and RIA Services ... he begins with one that just works, then moves on to demonstrate the kernel of the problem he's attacking and the solution of it. DoubleClick in Silverlight Domagoj Pavlešic got inspiration from one of Mike Snow's Tips of the Day and took off on the double-click idea... project source included. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Dealing with 2D pixel shaders and SpriteBatches in XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple of very simple 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0 and I'm struggling with the process of integrating per-object and full-scene shaders in my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discrete system managers (SpatialProvider, SceneProvider, etc). In the context of this question, my draw call looks something like this: SceneProvider::Draw(GameTime) calls... ComponentManager::Draw(GameTime, SpriteBatch) which calls (on each drawable component) DrawnComponent::Draw(GameTime, SpriteBatch) The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when its Draw(GameTime, SpriteBatch) method is invoked: public void Draw(GameTime gameTime, SpriteBatch spriteBatch) { spriteBatch.End(); spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, EffectShader, ViewMatrix); // Draw things here that are shaded by the "EffectShader." spriteBatch.End(); spriteBatch.Begin(/* same settings that were set by SceneProvider to ensure the rest of the scene is rendered normally */); } My question is, having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame were terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch it back before the function ends?
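    One mitigation people reach for, shown below as a rough, untested sketch (the manager method, the DrawnComponent.EffectShader property and the drawables collection are stand-ins invented for illustration, not from the question), is to group drawable components by the effect they request and issue one Begin/End pair per group, instead of breaking the batch inside every component's Draw:

        // Hypothetical sketch: one Begin/End per effect group instead of per component.
        // Assumes using System.Linq and the XNA 4.0 SpriteBatch API.
        public void DrawAll(IEnumerable<DrawnComponent> drawables, SpriteBatch spriteBatch,
                            GameTime gameTime, Matrix viewMatrix)
        {
            foreach (var group in drawables.GroupBy(d => d.EffectShader))  // null = default shader
            {
                spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                                  null, null, null, group.Key, viewMatrix);
                foreach (var component in group)
                    component.Draw(gameTime, spriteBatch);   // components no longer call Begin/End
                spriteBatch.End();
            }
        }

    The trade-off is that components give up control of batch state and draw order becomes per-group rather than strictly per-component, so this only fits when that ordering is acceptable; per-sprite shader parameter changes would still push you toward SpriteSortMode.Immediate, as in the original snippet.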

    Read the article

  • Configuring Transmission for faster download

    - by Luis Alvarado
    I have tested the following torrent clients on the same PC with the same torrent/magnet links: Transmission, KTorrent, Deluge, qBittorrent and Vuze. After 7 days of testing I noticed that the only one that took longer to start downloading and to reach an optimum/max download speed was Transmission. It was the slowest of them all to download the same torrents or magnet links (I tested 8 torrents and 4 magnet links from different sites), and it was the one that took the longest to start downloading or to restart after a pause/resume event. The other four took less than 2 seconds, for example, to start downloading, and downloaded the same content in between 50% and 80% less time. I think Transmission has the same downloading/resuming capabilities as the other torrent clients, but it may be that I need to do some configuration to get the same speed and behaviour as the others. In my tests all torrent clients were tested with their default configurations. No changes were made. They were tested on the same PC, with the same network connection, in the same time periods. So I am thinking that Transmission just needs a little bit of configuration tuning. I also set the same port for each client to use, and checked the router for any blocking or anything else network-related. What options can I change so that Transmission resumes a download faster (grabs the seeds faster) and keeps the download fast the whole time (stays with the seeds that offer the best connection, for example)? By the look of it, both of these are things the rest of the torrent clients already do.

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .Net webservice, and in the process I want to add some unit and integration tests so future modifications are less risky. The work this library performs is to marshal the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service - the work that's performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: One test might be createUserReturnsValidUserId() and might go like this: public function createUserReturnsValidUserId() { // we're assuming a global connection to the service $newUserId = $client->CreateUser("user1"); assertNotNull($newUserId); assertNotNull($client->FindUser($newUserId)); $client->deleteUser($newUserId); } So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
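    One way to reduce that fragility is to make each test's cleanup unconditional, either in PHPUnit's setUp()/tearDown() hooks or with a try/finally around the assertions, so a failed assertion can no longer leave state behind that breaks later tests. A rough sketch follows, written in C# only for consistency with the other illustrations on this page; every name in it mirrors the pseudo-code above rather than any real test framework:

        // Hypothetical sketch: cleanup runs even when the assertions fail.
        public void CreateUserReturnsValidUserId()
        {
            string newUserId = client.CreateUser("user1");
            try
            {
                AssertNotNull(newUserId);
                AssertNotNull(client.FindUser(newUserId));
            }
            finally
            {
                // Always attempt cleanup, so this test cannot leave an extra
                // user behind to confuse the tests that run after it.
                if (newUserId != null)
                    client.DeleteUser(newUserId);
            }
        }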

    Read the article

  • How lookaheads are propagated in "channel" method of building LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA: channels are built once, and they are like water channels with a current; you "drop" lookahead symbols at the right places (sources) of the channel, and they propagate with the "current"; when a symbol propagates, there are no barriers (the only things needed for propagation are the presence of a channel and its direction/current), i.e. a lookahead cannot just die out of the blue. Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state. How the DFA is made out of this NFA is not perfectly clear to me -- the authors of the mentioned book write about preserving channels, but I see no purpose in that if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source if the DFA state does not include the source NFA state? I assume not -- the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you take a look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p.301), you will see there are items without eof in the lookahead. The grammar for this DFA is: S -> E E -> E - T E -> T T -> ( E ) T -> n So how was it computed, when was eof dropped, and on what condition? Update: It is a textual pdf, so here are the two interesting states (in the DFA; # is eof): State 1: S--- >•E[#] E--- >•E-T[#-] E--- >•T[#-] T--- >•n[#-] T--- >•(E)[#-] State 6: T--- >(•E)[#-)] E--- >•E-T[-)] E--- >•T[-)] T--- >•n[-)] T--- >•(E)[-)] The arc from state 1 to state 6 is labeled '('.
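    For reference, and assuming the standard LR(1)/LALR closure rule rather than anything specific to the book's channel wording: when the closure of an item A ---> α•Bβ [L] adds items for B, the new items receive FIRST(βL) as their lookahead, so a symbol of L (including #) survives only when β can derive the empty string. A quick check of state 6 above with that rule:

        Kernel item:    T--- >(•E)[#-)]           here β = ")" for the E-items
        Closure on E:   lookahead = FIRST(")") = { ) }   (")" is not nullable, so # is dropped)
        E--- >•E-T      also feeds its own closure FIRST("-T") = { - }
        Result:         every closure item carries [-)]  -- exactly what the figure shows

    So eof is dropped at a closure step whenever something non-nullable stands between the nonterminal being expanded and the end of the producing item.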

    Read the article

  • The Middle of Every Project

    - by andrew.sparks
    I read a quote somewhere “The middle of every successful project looks like a mess.” or something to that effect. I suppose the projects where the beginning, middle and end are a mess are the ones you need to watch out for. Right now we are in ramp up of the maintenance/support teams at a big project in the Nordics. We are facing a year of mixed mode operations, where we have production operations and the phased rollout to new locations in parallel. The support team supports, and the deployment team deploys. As usual the assumption right up to about a month or so before initial go-live was that the deployment team would carry the support. Not! Consequently we had a last minute scramble over the Christmas/New Year to fire up a support/maintenance team. While it is a bit messy and not perfect – the quality of the mess (I mean scramble) is not so bad. Weekly operational review with the operational delivery managers, written issue lists and assigned actions, candid discussions getting the problems on the table and documented, issues getting solved and moved off the table. So while the middle of a project might look like a mess (even the start) it is methodical use of project management tools of checklists and scheduled communication points that are helping us navigate out of the mess and bring it all under control.

    Read the article

  • OpenGL: glGetError() returns invalid enum after call to glewInit()

    - by malymato
    I use GLEW and freeglut. For some reason, after a call to glewInit(), glGetError() returns error code 1280. Reinstalling the drivers didn't help. I tried to disable glewExperimental, it had no effect. Code worked before, but I am not aware of any changes I could possibly make. Here's my code: int main(int argc, char* argv[]) { GLenum GlewInitResult, res; InitWindow(argc, argv); res = glGetError(); // res = 0 glewExperimental = GL_TRUE; GlewInitResult = glewInit(); res = glGetError(); // res = 1280 glutMainLoop(); exit(EXIT_SUCCESS); } void InitWindow(int argc, char* argv[]) { glutInit(&argc, argv); glutInitContextVersion(4, 0); glutInitContextFlags(GLUT_FORWARD_COMPATIBLE); glutInitContextProfile(GLUT_CORE_PROFILE); glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS); glutInitWindowPosition(0, 0); glutInitWindowSize(CurrentWidth, CurrentHeight); glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA); WindowHandle = glutCreateWindow(WINDOW_TITLE); GLenum errorCheckValue = glGetError(); if (WindowHandle < 1) { fprintf(stderr, "ERROR: Could not create new rendering window.\n"); exit(EXIT_FAILURE); } glutReshapeFunc(ResizeFunction); glutDisplayFunc(RenderFunction); glutIdleFunc(IdleFunction); glutTimerFunc(0, TimerFunction, 0); glutCloseFunc(Cleanup); glutKeyboardFunc(KeyboardFunction); } Could someone tell me what I am doing wrong? Thanks.

    Read the article

  • How to design good & continuous tiles

    - by Mikalichov
    I have trouble designing tiles so that when assembled, they don't look like tiles, but like a homogeneous whole. For example, on the image below: even though the main part of the grass is only one tile, you don't "see" the grid; you know where it is if you look a bit carefully, but it is not obvious. Whereas when I design tiles, you can only see "oh, jeez, 64 times the same tile". A bit like on that image: (taken from a gamedev.stackexchange question, sorry; no criticism of the game, but it proves my point, and it actually has better tile design than what I manage) I think the main problem is that I design them so they are independent: there is no junction between two tiles when they are placed next to each other. I think having the tiles more "continuous" would have a smoother effect, but I can't manage to do it; it seems overly complex to me. It is probably simpler than I think once you know how to do it, but I couldn't find a tutorial on that specific point. Is there a known method to design continuous / homogeneous tiles? (My terminology might be totally wrong, don't hesitate to correct me.)

    Read the article

  • Can I make a Compiz animation rule for Wine menus?

    - by satuon
    Wine menus appear and disappear with the "Glide 2" effect. This looks good for dialogs but not so good for menus. Can I make a custom rule to apply fade in/out instead? I include the animation section from my exported Compiz settings (default values are skipped): [animation] s0_open_matches = ((type=Dialog | ModalDialog | Normal | Unknown) | name=sun-awt-X11-XFramePeer | name=sun-awt-X11-XDialogPeer) & !(role=toolTipTip | role=qtooltip_label) & !(type=Normal & override_redirect=1) & !(name=gnome-screensaver);(type=Menu | PopupMenu | DropdownMenu | Dialog | ModalDialog | Normal);(type=Tooltip | Notification | Utility) & !(name=compiz) & !(title=notify-osd); s0_close_effects = animation:Glide 2;animation:Fade;animation:None; s0_close_matches = ((type=Dialog | ModalDialog | Normal | Unknown) | name=sun-awt-X11-XFramePeer | name=sun-awt-X11-XDialogPeer) & !(role=toolTipTip | role=qtooltip_label) & !(type=Normal & override_redirect=1) & !(name=gnome-screensaver);(type=Menu | PopupMenu | DropdownMenu | Dialog | ModalDialog | Normal);(type=Tooltip | Notification | Utility) & !(name=compiz) & !(title=notify-osd);

    Read the article

  • Quantifying the value of refactoring in commercial terms

    - by Myles McDonnell
    Here is the classic scenario: the dev team builds a prototype. Business management likes it and puts it into production. The dev team now has to continue to deliver new features whilst at the same time paying down the technical debt accrued when the code base was a prototype. My question is this (forgive me, it's rather open-ended): how can the value of the refactoring work be quantified in commercial terms? As developers we can clearly understand and communicate the value in technical terms, such as the removal of code duplication, the simplification of an object model and so on. But this means little to an executive focused on the commercial elements. What will mean something to this executive is the dev team being able to deliver requirements at a faster velocity. Just making this statement without any metrics that clearly quantify return on investment (increased velocity in return for resources allocated to refactoring) carries little weight. I'm interested to hear from anyone who has had experience, positive or negative, in relation to the above. ----------------- EDIT ---------------- Thanks for the responses so far, all of which I think are good. What I want to develop is a metric that proves (or disproves!) all of these statements. A report that ties velocity to refactoring and shows a positive effect.
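    One simple way to frame such a metric, purely as an illustrative sketch (the figures and the cost model below are made-up assumptions, not anything from the question), is to express the refactoring as an investment and compare it with the velocity gained afterwards:

        cost of refactoring    = developer-days spent * loaded day rate
        value of velocity gain = (story points per sprint after - before)
                                 * value per story point * sprints in the horizon
        ROI                    = (value of velocity gain - cost) / cost

        Example with made-up numbers: 20 dev-days at $800 = $16,000 cost;
        velocity rises from 30 to 36 points/sprint, each point valued at $1,000,
        over a 6-sprint horizon: gain = 6 * 6 * $1,000 = $36,000;
        ROI = (36,000 - 16,000) / 16,000 = 125%.

    The hard part, as the question implies, is isolating the velocity change attributable to the refactoring, which is why before/after measurements over comparable work are usually needed to make the report credible.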

    Read the article

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I have recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance differences are severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage. The system load indicator shows me that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison: BEFORE Cache was used very sparingly; pretty much none of my memory usage was the cache. Swappiness was 0 (caused my memory to be used first, then swap only if needed). Performance was quite good and logical: RAM was used fully first, caching was minimal. I could run enough software to utilize my full 4GB of RAM without any performance degradation whatsoever. Swap space was then used as needed, which was obviously slower (I am on a HDD) but was still usable once the current program was loaded into memory. AFTER Cache is used to fill the full 4GB as soon as my virtual machine is run. Swappiness is 0 (same behaviour as before, but the cache uses the full memory straight away). Performance is terrible and unusable while running Ubuntu software. Basic things like changing windows take 2+ minutes. Changing screens happens frame by frame, sometimes over up to 5 minutes. I cannot run an IDE and a VM like I could with ease before. So basically, any suggestions on how to take my performance back to how it was before while keeping my current setup? My suspicion is that VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks EDIT: It could also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, then the RAM will lock at about 50% used and 50% cache and go into the complete lock-up mentioned above. Contrastingly, if VMware is closed (after being open), then the RAM will continue to rise as needed, the cache will stay as the complete remaining memory, and there is no noticeable performance degradation.

    Read the article

  • Can thousands of backlinks from the same site harm PageRank?

    - by Dejan Pelzel
    I just noticed that one particular site has almost 7000 backlinks linking back to our website. The site is something like a news aggregator, and for each post they created around 20 (sometimes many more) backlinks back to our page, and they basically linked over 400 pages. I am beginning to get concerned that this amount of links might harm our page. They seem to have more backlinks to our page than all the other linking sites combined, and more backlinks than our website has pages. We have seen a massive negative effect going on for quite a while, and the PageRank seems to have dropped to None (not even 0). But I am not sure when and why exactly that happened, seeing that PageRank updates take quite a while to appear. The site linking to us is otherwise pretty reputable and doesn't seem to be having any problems with their rank (PR 6, actually). I was thinking of using the Google disavow tool for this site, but I don't want to make things even worse. Do you think these links are harmful? If so, how do I fix this? Thanks :)

    Read the article

  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter ATEAM, we are often asked to performance tune a WebCenter custom portal application or a WebCenter Spaces deployment.  Most of the time, the process is pretty much the same.  For example, we often use tools like httpWatch and FireBug to monitor the application, and then perform load tests using JMeter or Selenium.  In addition, there are the fine tuning of the different performance based tuning parameters that are outlined in the documentation and by blogs that have been written by my fellow ATEAMers (click on the "performance" tag in this ATEAM blog).  While performing the load test where the outcome produces a significant reduction in the systems resources (memory), one of the causes that plays a role in memory "leakage" is due to the implementation of the navigation menu UI.  OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu.  In WebCenter Spaces, this is through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is contructed using standard ADF components.  These sample menu UI's basically enable the underlying navigation model to visualize itself to some extent.  However, due to certain limitations of these sample menu implementations (i.e. deeper sub-level of navigations items, look-n-feel, .etc), many customers have developed their own custom navigation menus using a combination of HTML, CSS and JQuery.  While this is supported somewhat by the framework, it is important to know what are some of the best practices in ensuring that the navigation menu does not leak.  In addition, in this blog I will point out a leak (BUG) that is in the sample templates.  OK, E.T. the suspence is killing me, what is this leak? Note: for those who don't know, info on E.T. can be found here In both of the included templates, the example given for handling the navigation back to the "Home" page, will essentially provide a nice little memory leak every time the link is clicked. Let's take a look a simple example, which uses the default template in Spaces. The outlined section below is the "link", which is used to enable a user to navigation back quickly to the Group Space Home page. When you (mouse) hover over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination.  Note: "home" in this case is the navigation model reference (id), that enables the display of the "pretty URL". Next, notice the current URL, which is displayed in the browser.  Remember, that PortalSiteHome = home.  The other highlighted item adf.ctrl-state, is very important to the framework.  This item is basically a persistent query parameter, which is used by the (ADF) framework to managing the current session and page instance.  Without this parameter present, among other things, the browser back-button navigation will fail.  In this example, the value for this parameter is currently 95K25i7dd_4.  Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that it reports that indeed the navigation is successful and the adf.ctrl-state is also in the URL.  For those that are wondering why the URL displays Page3.jspx, instead of Page2.jspx. Basically the (file) naming convention for pages created ar runtime in Spaces start at Page1, and then increment as you create additional pages.  The name of the actual link (i.e. Page2) is the page "title" attribute.  
So the moral of the story is, unlike design time created pages, run time created pages the name of the file will 99% never match the name that appears in the link. Next, is to click on the quick link for navigating back to the Home page. Quick investigation yields that the navigation was indeed successful.  In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter.  So what's the issue?  Can you remember what the value was for the adf.ctrl-state?  The current value is 3D95k25i7dd_149.  However, the previous value was 95k25i7dd_4.  Here is what happened.  Remember when (mouse) hovering over the link produced the following target URL: http://localhost:8888/webcenter/spaces/NavigationTest/home This is great for the browser as this URL will navigate to the intended targer.  However, what is missing is the adf.ctrl-state parameter.  Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object).  The previous adf.ctrl-state basically is orphaned while continuing to be alive in memory.  Note: the auto-creation of the adf.ctrl state does happen initially when you invoke the Spaces application  for the first time.  The following is the line of code which produced the issue: <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ... Here the boilerBean is responsible for returning the "string" url, which in this case is /spaces/NavigationTest/home. Unfortunately, again what is missing is adf.ctrl-state. Note: there are more than one instance of the goLinks in the sample templates. So E.T. how can I correct this? There are 2 simple fixes.  For the goLink's destination, use the navigation model to return the actually "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state: <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}"} ... />  Note: the node value is the [navigation model id]  Using a goLink does solve the main issue.  However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash".  In a Spaces application, this may be an annoyance to the users.  Another way to solve the leakage problem, and also remove the flash between navigations is to use a af:commandLink.  For example, here is the code example for this scenario: <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">    <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/> </af:commandLink> Here, the navigation node to where home is located is delivered by way of the attribute to the commandLink.  The actual navigation is performed by the processAction, which is needing the "node" value. E.T. OK, you solved the OOTB sample BUG, what about my custom navigation code? I have seen many implementations of creating a navigation menu through custom code.  In addition, there are some blog sites that also give detailed examples.  The majority of these implementations are very similar.  The code usually involves using standard HTML tags (i.e. DIVS, UL, LI, .,etc) and either CSS or JavaScript (JQuery) to produce the flyout/drop-down effect.  The navigation links in these cases are standard <a href... > tags.  Although, this type of approach is not fully accepted by the ADF community, it does work.  
The important thing to note here is that the <a> tag value must use the goLinkPrettyUrl method of constructing the target URL.  For example: <a href="${contextRoot}${menu.goLinkPrettyUrl}"> The main reason why this type of approach is popular is that with links created this way (as with af:goLinks), the pages become crawlable by search engines.  CommandLinks are currently not search friendly.  However, in the case of a Spaces instance this may be acceptable.  So in this use case, af:commandLinks would replace the <a> (or goLink) tags. The example code given for the af:commandLink above is still valid. One last important item.  If you choose to use af:commandLinks, special attention must be given to the scenario in which JavaScript has been used to produce the flyout effect in the custom menu UI.  In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom JavaScript and the ADF framework's own scripting to control the view.  The recommendation here would be to use a pure CSS approach to achieve the dropdown effects. One very important thing to note.  Due to another BUG, the WebCenter environment must be patched to BP3 (patch p14076906).  Otherwise the leak is still present using the goLinkPrettyUrl method.  Thanks E.T.!  Now I can phone home and not worry about my application running out of resources due to my custom navigation! 

    Read the article

  • Should I make the Cells in a Tiledmap as null when my player hits it

    - by Vishal Kumar
    I am making a tile-based game using Libgdx. I took the idea from the SuperKoalio platformer demo by Mario Zencher. When I wanted to implement collectables in my game, I simply drew the coins using the Tiled Map Editor. When my player hits one, I used to set that cell to null. Someone on this site suggested I not do so... never use null. I agreed. What other way is there? If I use layer.setCell(x,y) to set the cell to any other cell... even a transparent one... my player seems to be stopped by an invisible object/hurdle. This is my code: for (Rectangle tile : tiles) { if (koalaRect.overlaps(tile)) { TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1); try{ type = layer.getCell((int) tile.x, (int) tile.y).getTile().getProperties().get("tileType").toString(); } catch(Exception e){ System.out.print("Exception in Tiles Property"+e); type="nonbreakable"; } //Let us destroy this cell if(("award".equals(type))){ layer.setCell((int) tile.x, (int) tile.y, null); listener.coin(); score+=100; test = ""+layer.getCell(0, 0).getTile().getProperties().get("tileType"); } //DOING THIS GIVES A BAD EFFECT if(("killer".equals(type))){ //player.health--; //layer.setCell((int) tile.x, (int) tile.y, layer.getCell(20,0)); } // we actually reset the player y-position here // so it is just below/above the tile we collided with // this removes bouncing :) if (player.velocity.y > 0) { player.position.y = (tile.y - Player.height); } Is this the right approach? Or should I create a separate Sprite class called Coin?

    Read the article
