Search Results

Search found 28411 results on 1137 pages for 'think'.

  • Help deploying using Capistrano to HostGator

    - by Kyle Macey
    My company uses HostGator to host our websites, and I'm having a heck of a time figuring out what my final steps are to get a functioning RoR app up there. I've gotten all the way to configuring Mongrel (I think?) and being able to run deploy:cold without any errors. However, I can't seem to get the app to show up in the designated cPanel area (HostGator says the name "current" is already reserved for another application), and I'm not sure which port I'm allowed to use. I've opened tickets with Customer Support only to be told "You can't access the database with root", which is totally unrelated to my question. So I think I'm in the final stretch, and if anyone has any insight or experience with HostGator, please clue me in.

    Read the article

  • Difference in fans on Windows 7 and Ubuntu 12.04

    - by Timo
    I bought an HP Pavilion DV7 with a Core i7 CPU and installed Ubuntu 12.04 to dual boot alongside Windows 7. Apart from the difference in battery life (although that's fixed with Jupiter), I have another problem with the fans. On Windows my fans work perfectly and the laptop stays cool, but it seems to overheat in Ubuntu. It becomes quite hot and it looks like my fans are not working under Ubuntu. I think I'm having the same problem as How can I keep the cpu temp low?, but since I cannot comment because of the lack of reputation (?), I'm posting the question as a new thread. I think the result of the overheating is that my keyboard can't keep up when I start typing a long text. It just freezes and types the last letter multiple times. For example: when I type the word freezes, it shows freeeee, so the zes changed into eee...

    Read the article

  • Writing low latency Java

    - by user997112
    Are there any Java-specific techniques (things which wouldn't apply to C++) for writing low-latency code in Java? I often see low-latency Java roles and they ask for experience writing low-latency Java, which sometimes seems a little bit of an oxymoron. The only thing I could think of is experience with JNI, outsourcing I/O calls to native code. Also possibly using the disruptor pattern, but that's not an actual technology. Are there any Java-specific tips for writing low-latency code? I am aware there is a Real Time Java spec, but I have been warned real-time is not the same as low latency...

    Read the article

  • How to create a good sitemap for dynamic website

    - by Saif Bechan
    I have a website with dynamic content and different kinds of pages. I have some pages that rarely change, and I have pages like blogs that change often. The blog pages also have links for sorting, for example sorting on date, asc, desc. On some of the pages I also have links to different tabbed content, and links that are just anchor links. Now when I use an XML sitemap generator, all the links are thrown into the sitemap, and I don't think all the links are really relevant. The blog posts up until now are also taken into the sitemap. Is this really necessary? I think the links to the blog posts can be indexed just fine. Is the best way to make a sitemap just to manually assign the main menu links to the sitemap, or is indexing everything really recommended?

    Read the article

  • Interesting conversation about the nature of info-wars

    - by Malcolm Anderson
    Over at Schlock Mercenary, Howard Taylor has started a fascinating conversation on the nature of info-wars. As Howard puts it: Somebody (I forget who) tweeted that the Wikileaks fight right now is the first infowar in history. I disagree. I think we've fought numerous infowars in the last fifteen years. And that's really what I want to see discussed in the comments. We can argue right and wrong until the eCows come 127.0.0.1, but nobody is going to walk away convinced. I want to see a list of information-age conflicts that you feel qualify as "infowar." Me, I think the RIAA vs file-sharing qualifies. My buddy Rodney suggested RBLs vs spammers (the spammers won that one). Somebody pointed out that the Secret Service raid on Steve Jackson Games back in the 80's might qualify.

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago when the variety of projects started getting unwieldy.

    Scenario 1: I now have a process of enough complexity that it can be broken up into multiple smaller applications, and they all share files. In one phase, there is a single shared file, a constants file, that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk (controller classes and whatnot will all be the same) but with different content in each. The problem that I am running across is that the file in the first phase is in one repository with the application that started it, and the app framework is in a second, separate repository.

    Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops).

    The Problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects together into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can.

    The Question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.

    Read the article

  • Comparing Checksums

    - by Sean Feldman
    This is something trivial, yet it got me thinking for a little while. I had two checksums, one received from a client invoking a service, and another one calculated once the data sent into the service is received. The checksums are plain arrays of bytes. I wanted the comparison to be expressed as simply as possible. A quick Google search brought me to a post that dealt with the same issue, but the LINQ expression there was too chatty and I think the solution was a bit muddy. So I looked a bit more into the LINQ options presented in the post, and this is what I ended up using: var matching = original_checksum.SequenceEqual(new_checksum); Sometimes things are so simple, we tend to overcomplicate them.

    Read the article

  • How does an Engine like Source process entities?

    - by Júlio Souza
    [background information] In the Source engine (and its predecessors, GoldSrc and Quake's engine), the game objects are divided into two types, world and entities. The world is the map geometry and the entities are players, particles, sounds, scores, etc. (for the Source engine). Every entity has a think function, which does all the logic for that entity. So, if everything that needs to be processed comes from a base class with the think function, the game engine could store everything in a list and, on every frame, loop through it and call that function. At first glance this idea is reasonable, but it could take too many resources if the game has a lot of entities. [end of background information] So, how does an engine like Source take care of (process, update, draw, etc.) the game objects?
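
    As a rough illustration of the think-loop idea described in the question, here is a minimal sketch in Python (purely illustrative; the class names Entity, Door and World are invented here, and Source itself is written in C++):

        # Minimal sketch of a "think loop" entity system (illustrative only).
        class Entity:
            def think(self, dt):
                """Per-frame logic for this entity; subclasses override this."""
                pass

        class Door(Entity):
            def __init__(self):
                self.open_amount = 0.0

            def think(self, dt):
                # e.g. keep sliding open until fully open
                self.open_amount = min(1.0, self.open_amount + 0.5 * dt)

        class World:
            def __init__(self):
                self.entities = []

            def spawn(self, entity):
                self.entities.append(entity)

            def frame(self, dt):
                # The naive approach the question describes: call think() on every
                # entity each frame.  A real engine can cut this cost by letting
                # entities schedule their next think time, sleep while idle, or be
                # skipped when far from the player.
                for entity in self.entities:
                    entity.think(dt)

        world = World()
        world.spawn(Door())
        world.frame(1.0 / 60.0)   # one simulated frame at 60 FPS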

    Read the article

  • library put in /usr/local/lib is not loaded

    - by IARI
    Let me state in advance: one might think this question is for Server Fault, but I think it is Ubuntu (config) specific. In short: I have put libwkhtmltox.so in /usr/local/lib as stated in the installation instructions linked below, but it appears the library is not loaded. I am trying to install php-wkhtmltox, a PHP extension for wkhtmltox, on my local desktop (Ubuntu 12.04). I have extracted the source and changed to the corresponding directory. After running phpize, ./configure fails at checking for libwkhtmltox support... yes, shared not found configure: error: Please install libwkhtmltox I suspect the reason the library is not loaded is that the path is not checked!? How do I proceed? Here are the instructions I followed: http://davidbomba.com/index.php/2011/08/04/php-wkhtmltox/ http://roundhere.net/journal/install-wkhtmltopdf-php-bindings/

    Read the article

  • Building TrueCrypt on Ubuntu 13.10

    - by linuxubuntu
    With the whole NSA thing, people tried to rebuild binaries identical to the ones which truecrypt.org provides, but didn't succeed. So some think the official binaries might be compiled with backdoors which are not in the source code. So how do I compile it on the latest Ubuntu version (I'm using UbuntuGNOME, but that shouldn't matter)? I tried some tutorials for previous Ubuntu versions but they don't seem to work any more. Edit: https://madiba.encs.concordia.ca/~x_decarn/truecrypt-binaries-analysis/ Now you might think "ok, we don't need to build", but: to build it he used closed-source software, and there are proofs of concept where a compromised compiler still puts backdoors into the binary: 1. source without backdoors, 2. binary identical to the reference binary, 3. binary still contains backdoors.

    Read the article

  • Is functional programming a superset of object oriented?

    - by Jimmy Hoffa
    The more functional programming I do, the more I feel like it adds an extra layer of abstraction that wraps around the previous layers like the layers of an onion. I don't know if this is true, so going off the OOP principles I've worked with for years, can anyone explain how functional programming does or doesn't accurately depict any of them: encapsulation, abstraction, inheritance, polymorphism? I think we can all say yes, it has encapsulation via tuples, or do tuples technically count as a fact of "functional programming", or are they just a utility of the language? I know Haskell can meet the "interfaces" requirement, but again I'm not certain whether its method is a fact of functional programming. I'm guessing that since functors have a mathematical basis, you could say those are a definite built-in expectation of functional programming, perhaps? Please detail how you think functional programming does or does not fulfill the four principles of OOP.
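
    As one concrete angle on the encapsulation question, here is a small sketch (in Python rather than Haskell, and with invented function names) of how a closure can hide state much the way private fields behind public methods do:

        # Encapsulation in a functional style via a closure (illustrative sketch).
        # make_counter hides its state; callers can only interact through the
        # returned functions, much like private fields behind public methods.
        def make_counter(start=0):
            count = start              # state captured by the closures, invisible outside
            def increment(step=1):
                nonlocal count
                count += step
                return count
            def value():
                return count
            return increment, value

        increment, value = make_counter()
        increment()
        increment(5)
        print(value())                 # 6; there is no way to reach 'count' directly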

    Read the article

  • How to change speed without changing path travelled?

    - by Ben Williams
    I have a ball which is being thrown from one side of a 2D space to the other. The formula I am using for calculating the ball's position at any one point in time is: x = x0 + vx0*t and y = y0 + vy0*t - 0.5*g*t*t, where g is gravity, t is time, x0 is the initial x position, vx0 is the initial x velocity. What I would like to do is change the speed of this ball, without changing how far it travels. Let's say the ball starts in the lower left corner, moves upwards and rightwards in an arc, and finishes in the lower right corner, and this takes 5s. What I would like to be able to do is change this so it takes 10s or 20s, but the ball still follows the same curve and finishes in the same position. How can I achieve this? All I can think of is manipulating t but I don't think that's a good idea. I'm sure it's something simple, but my maths is pretty shaky.
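
    One standard way to do this (offered here as a sketch, not necessarily what the answers to the original question suggested): to stretch a flight that takes T seconds into one that takes k*T seconds along exactly the same curve, divide both initial velocity components by k and divide g by k*k. A quick Python check of that substitution, with made-up numbers:

        # Verify that scaling (vx0, vy0) by 1/k and g by 1/k**2 keeps the path
        # identical while making the flight k times slower.  Numbers are arbitrary.
        def position(t, x0, y0, vx0, vy0, g):
            x = x0 + vx0 * t
            y = y0 + vy0 * t - 0.5 * g * t * t
            return x, y

        x0, y0, vx0, vy0, g = 0.0, 0.0, 10.0, 24.5, 9.8   # original throw lands at t = 5 s
        k = 2.0                                           # make it take twice as long

        for t in (0.0, 1.0, 2.5, 5.0):
            fast = position(t, x0, y0, vx0, vy0, g)
            slow = position(k * t, x0, y0, vx0 / k, vy0 / k, g / k**2)
            assert all(abs(a - b) < 1e-9 for a, b in zip(fast, slow))
            print(t, fast, slow)   # same (x, y) points, reached at different times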

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

    ;; ANSWER SECTION:
    orion.2x.to. 116 IN A 80.237.201.41
    orion.2x.to. 116 IN A 87.230.54.12
    orion.2x.to. 116 IN A 87.230.100.10
    orion.2x.to. 116 IN A 87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs, etc.) it balances pretty well. The discrepancy from the highest to the lowest loaded server hardly ever exceeds 15%.

    However, now I have the problem that I am introducing more servers into the system, and they don't all have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a server with 10 Gbps with a weight of 100, a 1 Gbps server with a weight of 10, and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbit server 100 times to DNS is a bit ridiculous.

    So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers fall back to 120 if a lower TTL is specified, or so I have heard), I think something like this should occur in an ideal scenario:

    First 120 seconds:
    50% of requests get server A -> keep it for 240 seconds
    50% of requests get server B -> keep it for 120 seconds

    Second 120 seconds:
    50% of requests still have server A cached -> keep it for another 120 seconds
    25% of requests get server A -> keep it for 240 seconds
    25% of requests get server B -> keep it for 120 seconds

    Third 120 seconds:
    25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
    25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
    25% will have server A cached for another 120 seconds
    12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
    12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

    Fourth 120 seconds:
    25% will have server A cached -> cache for another 120 secs
    12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
    12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
    12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
    12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
    6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
    6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
    12.5% will have server A cached -> cache another 120 secs

    ... I think I lost something at this point, but I think you get the idea. As you can see, this gets pretty complicated to predict and it will for sure not work out like this in practice, but it should definitely have an effect on the distribution!

    I know that weighted round robin exists and is just controlled by the root server. It just cycles through DNS records when responding and returns DNS records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly it's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution, and it doesn't require a DNS server that controls this dynamically, which saves resources; that, in my opinion, is the whole point of DNS load balancing vs. hardware load balancers. My question now is: are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck and does not eliminate it. The only thing I can think of are anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
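
    A quick way to sanity-check the TTL idea described above before touching production DNS is to simulate a population of caching resolvers. A rough Python sketch (a toy model with a uniform query rate and perfectly honoured TTLs, so the numbers are only indicative; the server names and TTLs are the ones from the example above):

        # Toy simulation of TTL-weighted round robin: each resolver caches whichever
        # A record it last received for that record's TTL, while the authoritative
        # server hands out A and B 50/50.  With TTLs of 240 vs 120 seconds, server A
        # should end up serving roughly two thirds of the client requests.
        import random

        TTLS = {"A": 240, "B": 120}       # server -> TTL handed out with its record
        SIM_SECONDS = 2 * 3600
        N_RESOLVERS = 500
        QUERY_PROB = 0.05                 # chance a resolver gets a client query each second

        hits = {"A": 0, "B": 0}
        caches = [(None, 0)] * N_RESOLVERS    # resolver -> (cached server, expiry time)

        for second in range(SIM_SECONDS):
            for i in range(N_RESOLVERS):
                if random.random() < QUERY_PROB:
                    server, expiry = caches[i]
                    if server is None or second >= expiry:
                        server = random.choice(list(TTLS))      # 50/50 round robin answer
                        caches[i] = (server, second + TTLS[server])
                    hits[server] += 1

        total = sum(hits.values())
        for name in sorted(hits):
            print(name, round(100.0 * hits[name] / total, 1), "% of client requests")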

    Read the article

  • Visual Studio 2010 Is Here!

    I think back to the days of the first versions of Visual Studio (when it was called Visual Studio .NET, remember?) and I think about how far Microsoft has come with this IDE. It really is the best IDE on the market. There is so much to this IDE it is amazing. It now can really handle managing your complete software application development lifecycle. For me, it is (besides Windows 7) the best and most successful product Microsoft has developed. You can obviously get this now and it is available on...

    Read the article

  • Is there any reason to use "container" classes?

    - by Michael
    I realize the term "container" is misleading in this context - if anyone can think of a better term, please edit it in. In legacy code I occasionally see classes that are nothing but wrappers for data. Something like: class Bottle { int height; int diameter; Cap capType; getters/setters, maybe a constructor } My understanding of OO is that classes are structures for data and the methods for operating on that data. This seems to preclude objects of this type. To me they are nothing more than structs, and they kind of defeat the purpose of OO. I don't think it's necessarily evil, though it may be a code smell. Is there a case where such objects would be necessary? If this is used often, does it make the design suspect?

    Read the article

  • Will high reputation in Programmers help to get a good job?

    - by Lorenzo
    In reference to this question, do you think that having a high reputation on this site will help you get a good job? Aside from silly and humorous questions, on Programmers we can see a lot of high-quality theory questions. I think that, if Stack Overflow eventually evolves into "strictly programming related" (which usually means "strictly coding related"), the questions on Programmers will be much more interesting and meaningful ("Stack Overflow" = "I have this specific coding/implementation issue"; "Programmers" = "best practices, team shaping, paradigms, CS theory"). So could high reputation on this site help (or at least be a good reference)? And if so, more or less than on Stack Overflow?

    Read the article

  • Structure gameobjects and call events

    - by waco001
    I'm working on a 2D tile-based game in which the player interacts with other game objects (chests, AI, doors, houses, etc.). The entire map will be stored in a file which I can read. When loading the tilemap, it will find any tile with an ID that represents a game object and store it in a hashmap (the right data structure, I think?): private static HashMap<Integer, Class<GameObject>> gameObjects = new HashMap<Integer, Class<GameObject>>(); How exactly would I go about calling, and checking for, events? I figure that I would just call the update, render and input methods of each game object using the hashmap. Should I go towards a Minecraft/Bukkit approach (sorry, the only example I can think of), where the user registers an event and it gets called whenever that event happens, and where should I go for resources to learn about that type of programming (Java, LWJGL)? Or should I just loop through the entire hashmap looking for an event that fits? Thanks, waco
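
    For the register-and-dispatch approach mentioned in the question, here is a minimal event-bus sketch (written in Python for brevity, although the project above is Java/LWJGL; the EventBus and Chest names are invented for illustration):

        # Minimal event bus: objects register handlers for named events, and anything
        # can fire an event without knowing who is listening.  A Java version would
        # typically use listener interfaces instead of plain callables.
        from collections import defaultdict

        class EventBus:
            def __init__(self):
                self._handlers = defaultdict(list)    # event name -> list of callables

            def register(self, event_name, handler):
                self._handlers[event_name].append(handler)

            def fire(self, event_name, **data):
                for handler in self._handlers[event_name]:
                    handler(**data)

        class Chest:
            def __init__(self, bus):
                self.opened = False
                bus.register("player_interact", self.on_interact)

            def on_interact(self, target, **_):
                if target is self:
                    self.opened = True
                    print("Chest opened!")

        bus = EventBus()
        chest = Chest(bus)
        bus.fire("player_interact", target=chest)   # no need to scan every object each frame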

    Read the article

  • Pair Programming: Pros and Cons

    - by O.D
    I need some experience reports from those who have done pair programming. I noticed that lots of people recommend it, but my experience was that at some point it's more efficient to sit alone, think, and then write code than to talk with the other programmer (which can be very annoying to other programmers in the same office). Do you agree with this? And if so, can you mention situations where pair programming is less efficient than traditional programming? Actually, I'm more interested in the Cons than in the Pros, but if it's your own experience I would like to read both, the Cons and the Pros. I would also like to read what you think about the programmer who doesn't have the keyboard: what can he do in the meanwhile other than talking about the concept or checking the code on the screen?

    Read the article

  • How to check command line requests routed thru TOR?

    - by Chris
    I think I have VirtualBox set up so that all traffic originating from the guest OS is routed through TOR. TOR is only installed on the host OS, and the web browsers on the guest OS all report TOR IPs in Europe when I test with seemyip.com. The host OS shows my real IP when going to the same sites. But I initiate a lot of requests from the Linux command line, and I haven't been able to confirm to my satisfaction that these are routed through TOR, though I have no reason to think they are not. When I use this command I get no output from the guest OS, but my real IP from the host OS: dig myip.opendns.com @resolver1.opendns.com +short This bash file gives no output from either OS:

    #!/bin/bash
    echo Your external IP Address is:
    wget http://Www.whatismyip.com -O - -o /dev/null | grep '<TITLE>' | sed -r 's/<TITLE>WhatIsMyIP\.com \- //g' | sed -r 's/<\/TITLE>//g'
    exit 0

    Suggestions?
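
    One way to check from the guest's command line is to ask the Tor Project's own exit check. A sketch (this assumes the https://check.torproject.org/api/ip endpoint is reachable and still returns a JSON body with IsTor and IP fields; run it inside the guest):

        # Report whether this machine's outbound traffic appears to exit through Tor,
        # according to the Tor Project's check service.
        import json
        import urllib.request

        with urllib.request.urlopen("https://check.torproject.org/api/ip", timeout=30) as resp:
            info = json.load(resp)

        print("Apparent IP:", info.get("IP"))
        print("Routed through Tor:", info.get("IsTor"))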

    Read the article

  • Which is more maintainable -- boolean assignment via if/else or boolean expression?

    - by Bret Walker
    Which would be considered more maintainable? if (a == b) c = true; else c = false; or c = (a == b); I've tried looking in Code Complete, but can't find an answer. I think the first is more readable (you can literally read it out loud), which I also think makes it more maintainable. The second one certainly makes more sense and reduces code, but I'm not sure it's as maintainable for C# developers (I'd expect to see this idiom more in, for example, Python).

    Read the article

  • Why should we use low level languages if a high level one like python can do almost everything? [closed]

    - by killown
    I know Python is not suitable for things like microcontrollers, writing drivers, etc., but besides that you can do almost everything using Python. Companies get stuck on speed optimizations for hard real-time systems but forget other factors; for speed purposes you can often just upgrade your hardware until your Python program fits. Think about how much it can cost a company to maintain a system written in C. The comparison is like this: for example, 10 programmers to maintain a system written in C versus just one programmer to maintain a system written in Python, and with Python you can buy some better hardware to make up the difference. I think that low-level languages tend to cost more, since programmers aren't as cheap as a hardware upgrade. So this is my point: why should a system be written in C instead of Python?

    Read the article

  • What's the default traditional Chinese font?

    - by janoChen
    The only fonts that can render Chinese text are: WenQuanYi Micro Hei, WenQuanYi Micro Hei Mono, Droid Sans (I think it covers Unicode), and FreeSans (I think it does too). Changing Chinese text to Sans, FreeSans or Droid Sans renders the same font. WenQuanYi Micro Hei and WenQuanYi Micro Hei Mono render "bolder" Chinese text. EDIT: What I have discovered so far: it is not WenQuanYi Micro Hei, WenQuanYi Micro Hei Mono, or Droid Sans Fallback (Droid with CJK support). It can only be FreeSans or DejaVu Sans. I'm not sure which one is being used as the default (clean installation). Any idea?

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large-scale game project, and I've run into a situation where I'm not quite sure what the best solution may be (if there is a single solution). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad, but I'm not quite sure what to do and I thought it would help just to discuss the problem with people more educated in the field. Before we get started, here are some of the questions I've looked at for help in the past: Best way to keep track of game objects; Elegant way to simulate large amounts of entities within a game world; What is the most efficient container to store dynamic game objects in? I've also read articles about different data structures commonly used in games to store game objects, such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all, I'm using Python 3 to design the game. It has to be Python 3; if I could I would use C++ or UnityScript or something else, but I'm restricted to having to use Python 3.

    My game will be a form of side-scrolling shooter. In said game the player will traverse large rooms with large numbers of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects that it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects, however, are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or within a certain range of the screen). Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem: I need a way of containing the game objects that minimizes the number of cases to test when checking for collisions against one another. The final problem is that of creating and destroying game objects. I think this problem is pretty self-explanatory.

    To store the game objects, then, I've considered a number of different methods. The original method I had was to simply store all the objects in a hash table by an id. This method is simple, and decently fast as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem; I wasn't originally planning on using computer-generated ids to store the game objects, I was going to rely on ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer-generated ids, with a decent hashing algorithm the chances of collisions would be around 1 in 4 billion. The problem with using a hash table, however, is that it is inefficient for checking which objects are in range of the viewport. Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to only update objects that are in the viewport would be to iterate through every object in the hash table and check whether it is in the viewport or not, updating only the ones that are in the valid area. This would be incredibly slow in scenarios where the number of game objects exceeds 500, or even 200.

    The second solution was to store everything in a 2-d list. The world is partitioned into cells (a tilemap, essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects that are currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells with entities in valid range to be updated. This method also solves the problem of collision detection; when I take an entity I can find the cell that it is currently in, then check only against entities in its cell and the 8 cells around it. One problem with this system, however, is that it prohibits easy lookup of game objects. One solution I had would be to simultaneously keep a hash table that would contain the positions of all the objects in the 2-d list, indexed by the id of the object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed.

    Both systems have ups and downs and seem to solve some of each other's problems, but using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post; any and all criticism and help is welcome. Thanks in advance! EDIT: Please don't close the question because it has a bad title, I'm just bad with names!
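
    As a starting point for the second approach described above, here is a minimal spatial-grid sketch in Python 3 (the class name, cell size and object ids are invented for illustration). Instead of rebuilding the whole grid every frame, objects re-register only when they cross a cell boundary:

        # Minimal spatial grid: objects live in cells keyed by (cx, cy), plus a dict
        # from object id to its current cell so lookups and moves stay cheap.
        from collections import defaultdict

        CELL_SIZE = 64

        class SpatialGrid:
            def __init__(self):
                self.cells = defaultdict(set)   # (cx, cy) -> set of object ids
                self.where = {}                 # object id -> (cx, cy)

            def _cell(self, x, y):
                return (int(x // CELL_SIZE), int(y // CELL_SIZE))

            def insert(self, obj_id, x, y):
                cell = self._cell(x, y)
                self.cells[cell].add(obj_id)
                self.where[obj_id] = cell

            def move(self, obj_id, x, y):
                new_cell = self._cell(x, y)
                old_cell = self.where.get(obj_id)
                if new_cell != old_cell:        # only touch the grid on boundary crossings
                    if old_cell is not None:
                        self.cells[old_cell].discard(obj_id)
                    self.cells[new_cell].add(obj_id)
                    self.where[obj_id] = new_cell

            def remove(self, obj_id):
                cell = self.where.pop(obj_id, None)
                if cell is not None:
                    self.cells[cell].discard(obj_id)

            def query_rect(self, left, top, right, bottom):
                """All object ids whose cells overlap the given rectangle (e.g. the viewport)."""
                cx0, cy0 = self._cell(left, top)
                cx1, cy1 = self._cell(right, bottom)
                found = set()
                for cx in range(cx0, cx1 + 1):
                    for cy in range(cy0, cy1 + 1):
                        found |= self.cells.get((cx, cy), set())
                return found

        grid = SpatialGrid()
        grid.insert("Player", 100, 80)
        grid.insert("EnemyWeapon4", 900, 400)
        print(grid.query_rect(0, 0, 640, 480))   # only the objects near the viewport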

    Read the article

  • Programmers that need a lot of "Outside Help" - Is this bad?

    - by Zanneth
    Does anyone else think it's kind of tacky or poor practice when programmers use an unusual number of libraries/frameworks to accomplish certain tasks? I'm working with someone on a relatively simple programming project involving geolocation queries. The guy seems like an amateur to me. For the server software, this guy used Python, Django, and a bunch of other crazy libraries ("PostGIS + gdal, geoip, and a few other spatial libraries", he writes) to create it. He wrote the entire program in one method (in views.py, no less; facepalm), and it's almost unreadable. Is this bad? Does anyone else think that this is really tacky and amateurish? Am I the only minimalist out there these days?

    Read the article
