Search Results

Search found 20065 results on 803 pages for 'practice problems'.

Page 99/803 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • Best practice on Linux servers and CPU/power throttling?

    - by Valentin
    I am running a couple of Debian 6 (2.6.32) and 7 (3.2) Linux servers, and all of them have energy-saving settings enabled in their BIOS. Furthermore, Linux shows that the CPUs are throttled when the servers are idle. I wonder if this could cause any harm - could there be, for example, performance impacts because Linux does not handle the throttling correctly? Is there a best practice for Linux servers and power/CPU throttling? Do you guys switch your energy profiles to "performance", or do you leave both the BIOS and the OS at their default settings? The reason I am asking is that I encountered several performance issues on physical Dell servers even though all metrics (CPU load, memory, I/O, network, etc.) seemed normal. After changing the BIOS power settings to "performance" in those specific cases, I was able to get rid of the performance issues.
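
    If you want to see what the kernel is actually doing before touching the BIOS, here is a minimal sketch (assuming the standard sysfs cpufreq paths, which may differ on your kernel/distro) that reads the active governor and current frequency:

        // Minimal sketch: read the active cpufreq governor and current frequency for CPU0.
        // Assumes the standard Linux sysfs paths; they may differ on your kernel/distro.
        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            const std::string base = "/sys/devices/system/cpu/cpu0/cpufreq/";
            std::ifstream gov(base + "scaling_governor");
            std::ifstream cur(base + "scaling_cur_freq");
            std::string governor, freq_khz;
            if (gov >> governor)  std::cout << "governor: " << governor << "\n";
            if (cur >> freq_khz)  std::cout << "current frequency (kHz): " << freq_khz << "\n";
            // "ondemand" or "powersave" with a low idle frequency is normal throttling;
            // switching the governor to "performance" keeps the CPU at its highest state.
            return 0;
        }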

    Read the article

  • Best practice for ONLY allowing MySQL access to a server?

    - by Calvin Froedge
    Here's the use case: I have a SaaS system that was built (dev environment) on a single box. I've moved everything to a cloud environment running Ubuntu 10.10. One server runs the application, the other runs the database. The basic idea is that the server running the database should only be accessible by the application server and the administrator's machine, both of which have the correct RSA keys. My question: would it be better practice to use a firewall to block access to ALL ports except MySQL, or to skip the firewall/iptables and just disable all other services/ports completely? Furthermore, should I run MySQL on a non-standard port? This database will hold quite sensitive information and I want to make sure I'm doing everything possible to properly safeguard it. Thanks in advance. I've been reading here for a while but this is the first question that I've asked. I'll try to answer some as well = )

    Read the article

  • Running OpenVZ virtual servers within a Xen XCP virtual server? Bad practice?

    - by Damainman
    I have one server with 8 GB RAM and two quad-core processors. It currently has Xen XCP installed, with CentOS 6.2 x64 running in a virtual machine. I have server control panel software that I want to use, and it administers OpenVZ machines via a web interface. My questions are: Would this be considered bad practice? Would there be a big performance hit? Should I avoid this altogether, or am I going about it all wrong? Thank you in advance.

    Read the article

  • How long does it take in practice to warm up large in-memory databases?

    - by Sim
    Companies such as Peak Hosting are offering 64-core machines with 512 GB of RAM for $2K/month. This is a very interesting option for in-memory databases such as Memcached/Redis, as well as for databases whose performance degrades rapidly when the data and indexes don't fit in RAM, such as MongoDB. My main concern with monster machines like these is the time it takes to warm up an in-memory database. In my experience, theoretical metrics, e.g., that SATA can load 100 MB/s, fall short of what happens in practice. Even at that rate, 100 MB/s means that filling a 512 GB RAM machine from SATA disks can take over an hour and a half (!). I am looking for real-world reports of warm-up times for machines with very large memory. Please share details of the software on the machine, data size, storage configuration (e.g., SATA or SSD), network, and hosting/cloud provider, if relevant.
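
    For reference, the back-of-the-envelope arithmetic behind that hour-and-a-half figure (treating the 512 GB as GiB and the 100 MB/s as decimal megabytes per second):

        t \approx \frac{512 \times 2^{30}\ \text{B}}{100 \times 10^{6}\ \text{B/s}} \approx 5.5 \times 10^{3}\ \text{s} \approx 92\ \text{minutes}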

    Read the article

  • Are there any resources on how to identify problems that could best be solved with templates?

    - by sap
    I decided to improve my knowledge of template metaprogramming. I know the syntax and rules and have been playing with countless examples from online resources. I understand how powerful templates can be and how much compile-time optimization they can provide, but I still can't "think in templates": I can't seem to tell on my own whether a certain problem would best be solved with templates and, if it would, how to map the problem onto them. Is there some kind of online resource or book that teaches how to identify problems that are best solved with templates and how to adapt a problem to them?
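
    As a toy illustration of the kind of problem templates fit naturally (my own example, not from the question): a decision the program already knows at compile time is moved out of runtime code, so unsupported types fail to compile rather than fail at runtime. A C++17 sketch:

        // Toy C++17 example: the serialization strategy is chosen per type at
        // compile time, so there is no runtime branch and unsupported types are
        // rejected by the compiler instead of at runtime.
        #include <iostream>
        #include <string>
        #include <type_traits>

        template <typename T>
        std::string serialize(const T& value) {
            if constexpr (std::is_arithmetic_v<T>) {
                return std::to_string(value);              // numbers: plain text
            } else if constexpr (std::is_convertible_v<T, std::string>) {
                return "\"" + std::string(value) + "\"";   // string-like: quoted
            } else {
                static_assert(std::is_arithmetic_v<T>, "serialize: unsupported type");
                return {};
            }
        }

        int main() {
            std::cout << serialize(42) << " "
                      << serialize(3.5) << " "
                      << serialize(std::string("hi")) << "\n";   // prints: 42 3.500000 "hi"
        }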

    Read the article

  • Should I be worried if I solve a lot of my problems the same way?

    - by Bryan Harrington
    I really enjoy programming games and puzzle creators/games. I find myself engineering a lot of these problems the same way and ultimately using similar techniques that I'm really comfortable with to program them. To give you brief insight, I like to create graphs where nodes are represented by objects. These objects hold data such as coordinates and positions and, of course, references to other neighboring objects. I place them all in a data structure and make decisions on this information in a "game loop". While this is a brief example, it's not exact in all situations; it's just one approach I feel really comfortable with. Is this bad?
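
    For concreteness, here is a minimal sketch of the pattern described above (the names and the placeholder rule are illustrative only, not the asker's code):

        // Minimal sketch of the pattern in the question: nodes as objects holding
        // position data plus references (here, indices) to their neighbors, walked
        // once per iteration of a game loop. Names are illustrative only.
        #include <cstddef>
        #include <vector>

        struct Node {
            float x = 0.0f, y = 0.0f;              // coordinates on the board/grid
            std::vector<std::size_t> neighbors;    // indices of adjacent nodes
            bool active = false;                   // per-tick decision state
        };

        struct Board {
            std::vector<Node> nodes;

            void update() {                        // one pass of the "game loop"
                for (Node& n : nodes) {
                    n.active = false;
                    for (std::size_t i : n.neighbors) {
                        if (nodes[i].y < n.y) {    // placeholder rule for the decision
                            n.active = true;
                            break;
                        }
                    }
                }
            }
        };

        int main() {
            Board b;
            b.nodes = { {0.0f, 0.0f, {1}}, {1.0f, 1.0f, {0}} };  // two nodes linked to each other
            b.update();                                          // node 1 becomes active (its neighbor is lower)
            return 0;
        }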

    Read the article

  • How to solve problems with movement in simple tile based multiplayer game?

    - by Murlo
    I'm making a simple tile-based 2D multiplayer game in JavaScript using socket.io, where you can move one tile every 200 ms. The two solutions I've tried are as follows:

        1. The client sends "walk one tile north" every 200 ms. Problem: people can easily hack the client to send the action more often.
        2. The client sends "walking north" and "stopped walking". Problem: sometimes the player moves extra steps when "stopped walking" doesn't arrive in time.

    Do you know a way around these problems, or is there a better way to do it? EDIT: Regarding the first solution, I've tried adding validation on the server to check whether it has been 200 ms since the last movement. The problem is that latency still encourages people to spam the action as much as possible, giving them an unfair advantage.
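
    One way around the latency problem with the first approach is to track a small movement budget on the server instead of enforcing a strict 200 ms gap. The asker's stack is Node.js/socket.io; the sketch below is C++ purely to illustrate the logic, and all names are made up:

        // Sketch of a server-side movement budget ("token bucket") that tolerates latency:
        // a client may send moves a little early or late, but over any window it cannot
        // average more than one tile per 200 ms, so spamming gains nothing.
        #include <algorithm>
        #include <cstdint>

        struct MoveBudget {
            static constexpr double kTilePeriodMs  = 200.0;  // one tile per 200 ms
            static constexpr double kMaxBankedMoves = 2.0;   // small cushion for jitter only

            double  banked       = 1.0;   // moves the player is currently allowed
            int64_t lastUpdateMs = 0;     // server clock at last accrual

            bool tryMove(int64_t nowMs) {
                banked = std::min(kMaxBankedMoves,
                                  banked + (nowMs - lastUpdateMs) / kTilePeriodMs);
                lastUpdateMs = nowMs;
                if (banked < 1.0) return false;  // reject: client is sending too often
                banked -= 1.0;
                return true;                     // accept and apply the move server-side
            }
        };

        int main() {
            MoveBudget b;
            bool first  = b.tryMove(10);    // accepted: one move was banked at the start
            bool second = b.tryMove(20);    // rejected: only ~0.05 of a move has accrued
            bool third  = b.tryMove(230);   // accepted again after ~200 ms
            return (first && !second && third) ? 0 : 1;
        }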

    Read the article

  • How do .so files avoid the problems associated with passing header-only templates that MS DLL files have?

    - by Doug T.
    Based on the discussion around this question, I'd like to know how .so files/the ELF format/the GCC toolchain avoid the problems with passing classes defined purely in header files (like the standard library). According to Jan in that answer, the dynamic linker/loader only picks one version of such a class to load if it's defined in two .so files. So if two .so files have two definitions, perhaps built with different compiler options etc., the dynamic linker can pick one to use. Is this correct? How does this work with inlining? For example, MSVC inlines templates aggressively, which makes the solution I describe above untenable for DLLs. Does GCC never inline header-only templates like the standard library the way MSVC does? If so, wouldn't that make the ELF behavior described above ineffective in these cases?
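
    To make the scenario concrete, here is a toy header-only template of the kind the question is about (hypothetical file name; nothing here is from the linked discussion):

        // counter.h -- hypothetical header included by two shared libraries.
        // Each .so that uses Counter<int> emits its own copy of the static data
        // member and any non-inlined member functions as weak ("vague linkage")
        // symbols; with default ELF symbol visibility the dynamic linker binds all
        // of them to a single definition at load time, so both libraries observe
        // the same counter. Code the compiler has already inlined into callers is
        // not merged, which is the caveat the question raises.
        #include <cstddef>
        #include <iostream>

        template <typename T>
        struct Counter {
            static std::size_t count;
            Counter() { ++count; }
        };

        template <typename T>
        std::size_t Counter<T>::count = 0;

        // Stand-in main so the snippet compiles on its own; the interesting behavior
        // only shows up when two different .so files instantiate Counter<int> and are
        // loaded into the same process.
        int main() {
            Counter<int> a, b;
            std::cout << Counter<int>::count << "\n";   // prints 2
            return 0;
        }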

    Read the article

  • How to avoid problems when installing Ubuntu and Windows 7 in dual-boot?

    - by BlaXpirit
    I want to try out Ubuntu (and hopefully choose it as my primary OS). After looking at many versions of it in VirtualBox and from a Live CD, I've finally decided to install it, so I defragmented and shrank one of the partitions to make room for Ubuntu. My current setup (after shrinking the D: partition):

        100 MB - Reserved - NTFS
        250,000 MB - Windows 7 system (C:) - NTFS
        600,000 MB - Data (D:) - NTFS
        100,000 MB - Free space (for Ubuntu)

    The Internet (including AskUbuntu) is full of scary stories about Windows not loading after an installation of Ubuntu, something about installing GRUB to the wrong partition, etc. As I am a newbie to Linux and Ubuntu, it is very easy for me to do something wrong. Please mention the problems that may appear and explain how to avoid them. Ubuntu version to be installed: 10.10 Desktop amd64. Please note that I installed Windows 7 about a year ago, so I have much to lose if something goes wrong. I want to be very careful because there is no way for me to back up all the data.

    Read the article

  • Good source for interview-style coding problems for entry/intermediate developers?

    - by soster
    I have taught myself to code over the past few years and do not have a computer science degree. As a result, I lack experience with many things, such as the basic homework/test questions many CS graduates take for granted. I recently had a tech-screen interview where I fumbled and struggled to finish a (relatively) common question, I believe due to this inexperience. My question to all of you is this: do you know a good source for a bunch of these problems, with answers included, for an entry/intermediate developer who is trying to gain coding problem-solving experience? The ones I've been able to find on the internet are for coding teams, so they're a bit too complicated for me. Thanks so much in advance.

    Read the article

  • How do you keep from running into the same problems over and over?

    - by Stephen Furlani
    I keep running into the same problems. The problem itself is irrelevant, but the fact that I keep running into it is completely frustrating. It only happens once every 3-6 months or so, as I stub out a new iteration of the project. I keep a journal every time, but I still spend at least a day or two each iteration trying to get the issue resolved. How do you guys keep from making the same mistakes over and over? I've tried a journal, but it apparently doesn't work for me. [Edit] A few more details about the issue: each time I make a new project to hold the files, I import a particular library. The library is a C++ library which includes glew.h and glx.h. GLX redefines BOOL, and that's not kosher since BOOL is already defined by Objective-C. I had a fix the last time I went through this: I used an #ifndef block in the library's header to exclude GLEW and GLX, and everything worked hunky-dory. This time, however, I do the same thing, use the same #ifndef block, but now it throws a bunch of errors. I go back to the old project, and it works. New project no-worky. It seems like it does this every time, and my solution to it is new each time for some reason. I know #defines and #includes are one of the trickiest areas of C++ (and of mixing it with Objective-C), but I had this working and now it's not.
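
    For what it's worth, one commonly seen workaround for this particular clash is to rename BOOL while the X/GLX headers are parsed. The wrapper below is a hypothetical sketch (my names, not the asker's original fix), and it only addresses the BOOL collision the question mentions:

        // gl_wrapper.h -- hypothetical wrapper header, not the asker's actual fix.
        // While the X11/GLX headers are parsed, the token BOOL is renamed so their
        // declaration never collides with Objective-C's BOOL; afterwards the macro
        // is removed and Objective-C's BOOL is usable again.
        #ifndef GL_WRAPPER_H
        #define GL_WRAPPER_H

        #ifdef __OBJC__
          #define BOOL X11_BOOL   // X headers now declare X11_BOOL instead of BOOL
        #endif

        #include <GL/glew.h>
        #include <GL/glx.h>

        #ifdef __OBJC__
          #undef BOOL             // restore Objective-C's BOOL below this point
        #endif

        #endif // GL_WRAPPER_H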

    Read the article

  • Tell me why I should bother using Linux if it's all about problems getting the OS to install or work properly? [closed]

    - by Vilhjalmur Magnussin
    Why should I spend days trying to get Ubuntu to either install and/or work properly? I'm using an Acer Timeline X laptop, and if I install 10.04 the wireless doesn't work, and if I try installing 11.04 it either won't install, or if it does install it's full of bugs that cause my computer to freeze all the time. So please, I'm all ears. Someone give me one or two good reasons to continue wasting time (in the hope it eventually works) before I decide to focus my time on other things like productivity (using Windows, as I've been doing successfully for the last 10 years). This is the second time I've given Ubuntu a try; the first time was in 2010 using Ubuntu Studio and Ubuntu Desktop, and it ended with me shifting back to Windows, since I had spent more time getting everything to work than actually working while trying Ubuntu. I really don't understand why it needs to be like this. Why go on trying when all I see is forums full of discussions about problems which people are having difficulty fixing? Or maybe there is just one special type of computer which works well with Linux? I would very much like to know which computer that is. So please, if it's not too much trouble, I really want to hear from someone who has something good to say about going through all this trouble just to get a working environment up and running, since I already have a working environment up and running called Windows. Thanks, Villi.

    Read the article

  • Suggestion of device WiFi range in its spec. Possible?

    - by SeeR
    I have a Draytek Vigor 2100VG router at home, placed almost in its center. The farthest point in my home is the balcony, ~12 m from the router. I have constant WiFi signal range problems with some of my devices, but not with others:

        Lenovo W510 notebook - no problems
        Nokia Home Music - always at 10 m - no problems
        Sony PS3 - always at 7 m - no problems
        Sony Tablet S - problems around 6 m
        Sony PSP - problems around 8 m
        Sony PS Vita - problems around 8 m
        Nokia E63 - problems around 8 m

    I'm curious why my notebook doesn't have any problems, even on the balcony. I guess it has better hardware or uses more transmission power. This information is really important when you want to buy a new device/computer, so my real question is: can a device's WiFi range somehow be found in or inferred from its official hardware specification? If not, is there some web page with WiFi range reviews?

    Read the article

  • What to do as a new team lead on a project with maintainability problems?

    - by Mr_E
    I have just been put in charge of a code project with maintainability problems. What can I do to get the project on a stable footing? I find myself in a place where we are working with a very large multi-tiered .NET system that is missing a lot of the important things, such as unit tests, IoC, and MEF, and has too many static classes, plain DataSets, etc. I'm only 24, but I've been here for almost three years (this app has been in development for 5), and mostly due to time constraints we've just been adding in more crap to fit the other crap. After doing a number of projects in my free time, I have begun to understand just how important all those concepts are. Also, due to employee shifting, I now find myself the team lead on this project, and I really want to come up with some smart ways to improve this app - ways where the value can be explained to management. I have ideas of what I would like to do, but they all seem overwhelming without much upfront gain. Any stories of how people have dealt or would have dealt with this would be a very interesting read. Thanks.

    Read the article

  • Is my site hacked, or does Google have problems? [duplicate]

    - by Bondye
    Possible Duplicate: Titles in Google results contain spammy prefixes

    I have a webshop online, and I have some problems with redirects from Google.

        Case 1: When I Google for my site at google.com in Iron SWR (safe Chrome version) and click the first link, I get the correct page.
        Case 2: When I Google for my site at google.nl in Iron SWR (safe Chrome version) and click the first link, Google redirects me to a spam site.
        Case 3: When I Google for my site in Google Chrome and click the first link, Google redirects me to a spam site.
        Case 4: When I Google for my site in Firefox and click the first link, Google redirects me to a spam site.
        Case 5: When I Google for my site in Internet Explorer and click the first link, Google redirects me to a page that tells me the site is offline.

    Help - what should I do? I checked the .htaccess file, but it is correct. I checked the index.php file, but it is also correct. What can I do? Have I been hacked, or does Google have a problem?

    Read the article

  • What is best practice (and implications) for packaging projects into JARs?

    - by user245510
    What is considered best practice when deciding how to define the set of JARs for a project (for example, a Swing GUI)? There are many possible groupings:

        A JAR per layer (presentation, business, data).
        A JAR per (significant?) GUI panel. For a significant system this results in a large number of JARs, but the JARs are (or should be) more reusable - fine-grained granularity.
        A JAR per "project" (in the sense of an IDE project): "common.jar", "resources.jar", "gui.jar", etc.

    I am an experienced developer; I know the mechanics of creating JARs, I'm just looking for wisdom on best practice. Personally, I like the idea of a JAR per component (e.g. a panel), as I am mad-keen on encapsulation and the holy grail of reuse across projects. I am concerned, however, that on a practical, performance level, the JVM would struggle with class loading across dozens, maybe hundreds, of small JARs. Each JAR would contain the GUI panel code and the necessary resources (i.e. not centralised) so each panel can stand alone. Does anyone have wisdom to share?

    Read the article

  • Best practice for managing changes to 3rd party open source libraries?

    - by Jeff Knecht
    On a recent project, I had to modify an open source library to address a functional deficiency. I followed the SVN best practice of creating a "vendor source" repository and made my changes there. I also submitted the patch to the mailing list of that project. Unfortunately, the project only has a couple of maintainers and they are very slow to commit updates. At some point, I expect the library to be updated, and I expect that my project will want to use the upgraded library. But now I have a potential problem... I don't know whether my patch will have been applied to this future release of the 3rd party library. I also don't know whether my patch will even still be compatible with the internal implementation of the upgraded components. And in all likelihood, someone else will be maintaining my project by that point. Should I name the library in a special way so it is clear that we made special modifications (eg. commons-lang-2.x-for-my-project.jar)? Should I just document the patch and reference the SVN location and a link to the mailing list item in a README? No option that I can think of seems to be fool-proof in an upgrade scenario. What is the best practice for this?

    Read the article

  • What Problems Are Better Solved By SOAP Over REST?

    SOAP and REST have been battling for web service supremacy for years. In my personal opinion, this debate should never have existed. Yes, both forms can be used to create an interactive web service, but each was developed independently to solve two different yet similar problems. Based on my research and experience, I would have to say that REST should be the preferred web service methodology, and SOAP should only be used in specific situations. Note, I did not say that I am against SOAP; in fact, I actually like to use SOAP when it is needed. Criteria for using SOAP:

        Does the service need a guaranteed level of reliability and security?
        Did the provider and consumer of the service agree on a standardized data exchange format?
        Does the service need data context and state management?

    If you answer yes to any of these questions, then you may want to consider SOAP as the format for the web service. Another way to look at the relationship between REST and SOAP is to look at the medical field. A general doctor or your family health care provider can acceptably treat most conditions, from a common cold to a broken bone. A general doctor aligns more with REST, in my opinion, because for most service requirements REST fulfills a project's needs; but if you need a more advanced examination, you go to a specialist. A specialist already has experience dealing with the specific issues you are experiencing, giving them specific context for how best to treat you going forward. SOAP acts more like a specialist doctor, in that it understands the context of an issue and can treat it based on the state of other patients it has already treated. An example of where I would use SOAP over REST in real life would be a single sign-on application. In these cases I need to validate a username and password for authentication and authorization of a web page request. This service would need to maintain state while it authenticated a user and while it validated access to a web page on a subsequent request. This service must process every request for access and not allow caching, to ensure that every request is processed and the appropriate users are allowed to view selected web pages. References: Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved November 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc nor from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least. DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) when in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way. Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer. Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.

    Read the article

  • What is the 'best practice' for installing perl modules on Solaris/OpenSolaris?

    - by AndrewR
    I'm currently in the process of writing setup instructions for some software I've written that is implemented as a set of Perl modules. Having done this for various flavours of Linux, I'm now doing the same for Solaris/OpenSolaris (v10 only). Part of the setup process is to make sure that dependent Perl modules are installed. This has been pretty easy on Linux, as the Perl modules I require tend to be within the distro's packaging system (e.g. yum install perl-Cache-Cache). This is not the case on Solaris, so I'm working on setup instructions that use the CPAN module to fetch dependent modules (e.g. perl -MCPAN -e 'install Cache::Cache'). This works OK, but there are known problems with modules that require things to be built with a C compiler: the generated Makefile assumes you're using Sun's compiler and uses command-line options not understood by gcc, which you may be using instead. Consulting teh Internetz has thrown up a number of solutions to this:

        Install and use Sun's compiler
        Use the perlgcc wrapper script
        Edit the makefiles by hand (yuk)

    All of these work. My question to those more familiar with Solaris than me is: is one of these the 'best' or 'most commonly used' method?

    Read the article

  • Problems with XP, Office, and PC in general - any ideas?

    - by molecule
    Hi all. This may not make a whole lot of sense, so please bear with me... I am about to perform a routine check on one of my users' PCs. Some background: the PC has a Xeon processor and 4 GB of RAM and is running XP SP3. He has two HDDs; the pagefile is hosted on the secondary HDD (D:) with min/max values set to 4096, and there is NO pagefile on C:. This user has 6 monitors, so he has an NVIDIA Quadro NVS440 hosting 4 monitors and an NVIDIA Quadro NVS290 hosting 2 monitors. There is a video card driver from NVIDIA which is compatible with both the NVS440 and the NVS290, and he is on the latest version of that driver. (Note: the makes of the video cards are different - one is from Leadtek and the other from NVIDIA.) He is a heavy Bloomberg, Outlook, Word, and Excel user and runs two Citrix applications. Other apps are FoxIt PDF and IE. Problems: Outlook and Excel frequently crash - I am going to perform an Outlook and Excel repair and also check/remove unnecessary add-ins. Will he lose any customizations if I repair and choose "Restore my shortcuts while repairing" but do not select "Discard my customized settings and restore default settings"? Does repair really repair anything? FYI - it stopped crashing ever since I moved a large spreadsheet he had open to his local HDD instead of over the network. This spreadsheet "refreshes" constantly, as it is pulling live data to update cells, and I suspect it was auto-saving so frequently that it caused crashes when saving over the network. At times, his right-click completely fails to respond. His left-click works fine, but he can't right-click on anything in any window, or even on the desktop. Sometimes he needs to close certain applications, such as Adobe, and the right-click will start functioning again. I removed Adobe and installed FoxIt, as I figured it was a resource issue, but I don't think it is, since he does have sufficient resources when the problem is happening. Sometimes he can't bring Task Manager up until he kills certain apps. It definitely sounds like a resource issue, but I am not confident that is the root cause. Also, I'm not sure if this is related to one of the installed apps, but his Start bar flickers (does not completely disappear) intermittently from time to time. The taskbar icons which are hidden appear and then get hidden again, as if it were having "fits". I have performed registry scans, malware scans, etc., but the problems do not go away. I am planning to perform sfc /scannow and an Office repair, but I would like to know if anyone has any other suggestions. What about setting a "small" pagefile on C:? I have heard that this is recommended, and the lack of one may be the reason why a minidump file was not generated when he encountered a blue screen. Also, any feedback on his video cards? Do you think different models would cause problems? The drivers seem to work, but he only has 2.5 GB out of 4 GB of RAM available, as I believe the video cards chomped up a portion of it. I have recommended creating a new profile for him, but due to the amount of customisation he has and the amount of time and effort it would take to get him up and running again, he prefers to bear with the problem rather than go down that path. However, at least once a week his PC acts up, and I can't think of any other tools or techniques to rectify his problems. I guess we are at a stage where we just want to "stabilize" things so he won't encounter issues so frequently. Any feedback is very much appreciated.

    Read the article

  • Alternative way of developing for ASP.NET (instead of WebForms) - Any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now, but I often get annoyed with all the overhead (like ViewState and all the JavaScript it generates) and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and to produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms. Essentially this is using ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it seems to run well, and you still have the ability to use code-behind pages and many ASP.NET controls, such as repeaters. Of course, without the form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls. But how do you access these HTML controls from the code-behind? Retrieving values on postback is easy: you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls, and then you can access the control in your code-behind like this: ((HtmlInputText)txtName).Value = "blah"; Here's an example that shows what you can do with a textbox and drop-down list:

    Default.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form action="" method="post">
                <label for="txtName">Name:</label>
                <input id="txtName" name="txtName" runat="server" /><br />
                <label for="ddlState">State:</label>
                <select id="ddlState" name="ddlState" runat="server">
                    <option value=""></option>
                </select><br />
                <input type="submit" value="Submit" />
            </form>
        </body>
        </html>

    Default.aspx.cs

        using System;
        using System.Web.UI.HtmlControls;
        using System.Web.UI.WebControls;

        namespace NoForm
        {
            public partial class Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    // Default values
                    string name = string.Empty;
                    string state = string.Empty;

                    if (Request.RequestType == "POST")
                    {
                        // If form submitted (post back)
                        name = Request.Form["txtName"];
                        state = Request.Form["ddlState"];
                        // Server-side form validation would go here,
                        // and actions to process the form and redirect
                    }

                    ((HtmlInputText)txtName).Value = name;
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));

                    if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                        ((HtmlSelect)ddlState).Value = state;
                }
            }
        }

    As you can see, you get similar functionality to ASP.NET server controls but more control over the final markup, and less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads with your own input type="file" control (plus the necessary form enctype="multipart/form-data"). So my question is: can you see any problems with this method, and any thoughts on its usefulness? I have further details and tests on my blog.

    Read the article
