Search Results


  • Setting up Tornado with Nginx on Ubuntu 10.04 for production use

    - by DjangoRocks
    Hi all, I understand that there's a sample nginx configuration (the one used for http://www.friendfeed.com), but I don't really know how to set up Tornado for production use on Ubuntu 10.04 with nginx. Here's my situation and assumptions: 1) My Tornado project is laid out as follows:

        project/
            src/
            static/
            templates/
            project.py

    I installed Tornado by downloading the repository from GitHub and then running sudo python setup.py install. 2) I've installed nginx and started it based on the instructions here: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid My questions are: Where does my nginx configuration file go? Within the src/ folder? After configuring nginx, how do I start my Tornado project?
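
    A minimal sketch of the usual arrangement, with assumed paths and port: the nginx config lives in nginx's own directory (on Ubuntu, e.g. /etc/nginx/sites-available/, enabled via sites-enabled), not inside src/, and it reverse-proxies to a Tornado process that you start yourself:

        # /etc/nginx/sites-available/project  (path is an assumption)
        upstream tornado {
            server 127.0.0.1:8888;    # the port your Tornado app listens on
        }
        server {
            listen 80;
            location /static/ {
                alias /home/you/project/static/;   # serve static files directly
            }
            location / {
                proxy_pass http://tornado;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    The Tornado app is then started separately (assuming project.py starts an IOLoop), e.g. python src/project.py, typically under a process supervisor so it survives crashes.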

    Read the article

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labelled RED and BLACK based on a certain criterion. (A) A subgraph containing all the RED nodes (with edges only between directly connected neighbours) is formed (note: although this figure shows a tree forming, it may well happen that the subgraph contains loops). (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that I have to keep the already present edges in the final result. Question: Is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side remarks: it would be nice for the triangulation algorithm to take into account a certain vertex priority when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first - I can implement such a routine iteratively, but it's O(n^2)). It would also be nice to reflect somehow the "hop distance" when adding edges: add edges first between the nodes that were "closer" to each other in the starting topology. Nevertheless, disregarding the remarks, is there an already known scenario similar to this one, where a triangulation is built upon a partially given set of triangles/edges?
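
    One possibly relevant tool (my suggestion, not from the question): this is close to constrained Delaunay triangulation, which triangulates a point set while preserving a prescribed set of edges - typically O(n log n) rather than the hoped-for O(n), and without the priority/hop-distance refinements. A minimal sketch with the Python triangle package (bindings for Shewchuk's Triangle):

        # Constrained triangulation that keeps the listed 'segments' intact.
        # The 'p' switch reads the input as a planar straight-line graph.
        import numpy as np
        import triangle

        pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
        keep = np.array([[0, 4], [4, 2]])  # edges that must survive triangulation

        out = triangle.triangulate({'vertices': pts, 'segments': keep}, 'p')
        print(out['triangles'])  # vertex-index triples; 'keep' edges are preserved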

    Read the article

  • Set up an automatic server reboot when a particular service fails

    - by user1179459
    I am running a Linux-based server (CentOS 6.0) with cPanel and WHM. I have a critical website running a chat service that uses Openfire as its backend, and over the last few weeks this service has crashed quite often. I have no way of knowing when it happens, so I have to wait until the next day to restart the server (and this can only be fixed by a server reboot, as it's got to do with some Java memory problem). Is there a way I can set up a monitoring service so that if this service goes down, the server reboots itself? Is this possible, or is there a better way to overcome this problem?
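
    A minimal cron-based watchdog sketch (the script path, service name and schedule are assumptions; a monitoring tool such as monit does the same job more gracefully, and restarting just Openfire is usually preferable to a full reboot if it works):

        #!/bin/bash
        # /root/openfire-watchdog.sh - reboot if Openfire's init script reports it down.
        # Run from cron, e.g.:  */5 * * * * /root/openfire-watchdog.sh
        if ! /sbin/service openfire status >/dev/null 2>&1; then
            logger "openfire down - rebooting"
            /sbin/shutdown -r now
        fi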

    Read the article

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that, for example for the world basement and walls, we need to create the actual rendered walls (shown to the player) and then duplicate them, place the duplicates at the same coordinates as the rendered walls, and mark them as collision geometry (by setting their material to "collision"). Then, in the object-loader function, objects whose material == collision are treated as collision planes: they are not rendered but are only used to check collisions. Now I'm pretty confused. Why would we add this kind of complexity to a problem that can easily be solved by a simple loadObject(string plane_object, bool check_collision), which would: create only the walls object (by loading the .obj file given in plane_object); and define it as a collision plane whenever check_collision is set to true. In this case we have lowered the complexity of his method and made it more flexible and faster to develop with (faster because we don't always have to make a copy of each plane, and flexible because we don't hardcode the object loader). The only case in which this method could not work is when we need hidden collision planes, and for that we could modify the loadObject() function like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false), which would: create only the walls object (by loading the .obj file given in plane_object); define it as a collision plane whenever check_collision is set to true; and add the ability to actually show or hide the object based on hide_object. The final question is: am I right? What possible problems would my solution run into compared with his?

    Read the article

  • Employee Tracking: Is there software similar to Elance WorkView or oDesk "Team Room"?

    - by Kunal
    We are looking for a great commercial or free tool which can monitor all our remote employees and keep the reports centrally. We need something similar to Elance WorkView or oDesk's "Team Room"; what these tools do is: take screenshots at random intervals; track activity on the computer based on keystrokes (not a necessity). It doesn't necessarily need to track time, although that would be good to have; our aim is to monitor employees and make sure they're working - that's all. I'd give the oDesk Team Room 10/10, and I haven't been able to find such a tool elsewhere. Can anyone suggest one? Thanks

    Read the article

  • Leveraging Code Across Platforms in Ever Bigger Games

    - by ashes999
    Summary: I continually build complex engines and libraries within a single platform and technology so that I can build increasingly bigger and better games. How do I continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experience? Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start by building smaller games, and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there. One game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example: a simple 2D game; a tile-based game; a game with RPG elements (items, equipment, monsters, battle); a full-fledged RPG; a 3D RPG. The problem now is that if I have to change platforms or tools, I don't know how to leverage past code-bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I would have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

    Read the article

  • Update Errors in Xubuntu 12.10

    - by wil
    I updated my computer from 12.04 to 12.10, and ever since the upgrade finished I have been unable to update it. I tried installing a fresh copy of 13.04, but my CPU doesn't support PAE. I have an IBM ThinkPad T42 with a 1.7 GHz CPU. This is the output when updating through the terminal:

        $ sudo apt-get upgrade
        [sudo] password for wil:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         linux-image-extra-3.5.0-34-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
         linux-image-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
        E: Unmet dependencies. Try using -f.

        $ sudo apt-get upgrade -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following NEW packages will be installed:
          linux-image-3.5.0-34-generic
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/11.8 MB of archives.
        After this operation, 25.9 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 191530 files and directories currently installed.)
        Unpacking linux-image-3.5.0-34-generic (from .../linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb) ...
        This kernel does not support a non-PAE CPU.
        dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        wil@wil-ThinkPad-T42:~/Desktop$
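
    For what it's worth, a quick way to check whether the CPU advertises PAE (some Pentium M chips of the T42 era actually support PAE without advertising it in their flags, which is why later Ubuntu installers gained a forcepae boot option - worth verifying for your exact model):

        # Does the kernel report a PAE flag for this CPU?
        grep -q pae /proc/cpuinfo && echo "PAE advertised" || echo "no PAE flag"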

    Read the article

  • Reg: Putty SSH Access

    - by gourav
    Dear all, I have recently purchased a Linux-based virtual server. I need to transfer files from my local computer to this virtual server. I tried downloading PuTTY, but there were no EXE files to install; I am using Windows XP at home. If possible, can I have the installer link? Do we need to know Linux to use PuTTY? Also, is there any other tool that can be used by people who don't know Linux commands? Please help.
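
    For context: PuTTY is distributed mainly as standalone executables (putty.exe for the terminal, pscp.exe for file copies) that run without installation, though the download page also offers an installer package. A minimal file-transfer sketch with pscp.exe from a Windows command prompt (the host address and paths are placeholders):

        rem Copy a local file to the server over SSH
        pscp C:\work\site.zip root@203.0.113.10:/var/www/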

    Read the article

  • Large enterprise application - clients wish to use duplicate e-mails addresses?

    - by Alex Key
    I'd like to know people's opinions, reactions to clients, and technical workarounds (if applicable) on the issue of an enterprise application where a client wishes to use duplicate e-mail addresses. To clarify, by duplicate e-mail addresses I mean multiple users within the same client system sharing the same e-mail address - so not just using generic addresses, but using the e-mail address of another user, e.g. Bob Jenkins: [email protected]; James Jeffery: [email protected]. Context: to give this some further context, in the e-learning sector it is common that although all staff in an organisation must complete e-learning, they may not have their own e-mail address, so they choose to use their manager's e-mail address. Albeit against good practice on public sites, it's a requirement we've seen over and over again where an organisation is split between office-based staff and, e.g., staff in a warehouse. Where the problem lies: Mr Steak, good point - the problem lies in password resets, and perhaps in situations where semi-personal information could be sent (not confidential enough to worry about the insecurities of e-mail), or reminders for specific system actions, which would be confusing for the unintended party to see (if they misread the e-mail's intended recipient). Possible solutions: the system knowing the difference between "for the attention of" and direct-to-the-person e-mails, and including this in the body text; using alternative communication such as SMS; simply not having e-mails sent to people who are not the intended recipient; providing an e-mail service ourselves (not really viable for a corporate IT dept). Thoughts?

    Read the article

  • MySQL Windows vs. Linux: performance, caveats, pros and cons?

    - by gravyface
    Looking for (preferably) some hard data, or at least some experienced anecdotal responses, with regard to hosting a MySQL database (roughly 5k transactions a day, 60-70% more reads than writes, < 100k of data per transaction, i.e. no large binary objects like images) on Windows 2003/2008 vs. a Debian-based derivative (Ubuntu/Debian, etc.). This server will function only as a database server, with a separate web server on another physical box; this server will require remote access for management (SSH for Linux, RDP for Windows). I suspect that the Linux kernel/OS will compete less with MySQL for resources than Windows Server will, but of this I can't be certain. There's also the security footprint: even with Windows 2008, I'm thinking that the Linux box can be locked down more easily than the Windows Server. Does anyone have experience with both configurations?

    Read the article

  • Oracle Unveils AutoVue Release 20.1

    - by prasenjit.niyogi(at)oracle.com
    We are extremely pleased to announce the availability of Oracle's AutoVue Release 20.1. AutoVue 20.1 is the latest major release of the family of Enterprise Visualization solutions from Oracle. Highlights of the release include: unparalleled new format support and enhancements for 3D CAD, 2D CAD, ECAD and PDF documents; new capabilities that support end-to-end design-to-manufacture processes in the Electronics & High Tech space, allowing manufacturing engineers to perform accurate manufacturability reviews through better support for variants, overlays and polarity; significant printing enhancements, such as printing of markup notes, support for Excel file print settings, and printing in grayscale, which serve to optimize paper-based business processes; powerful integration enablement capabilities to extend visualization into existing enterprise architectures and systems, including AutoVue hotspots that enable visual navigation and action by linking visual data to structured enterprise data, and new AutoVue Document Print Services (DPS) to enrich enterprise applications with format- and platform-agnostic printing of any document type; and improvements for cost-effective AutoVue deployment and administration, including support for virtualization. Release 20.1 webcast - attend the webcast on April 13th at 12:00 pm EST to discover what is new and exciting in the latest release. Encourage your customers, prospects, and partners to attend. Title: Oracle Unveils AutoVue Release 20.1. Channel: Oracle AutoVue Channel. Register here: http://www.brighttalk.com/webcast/26282 To discover more about the latest release, and to find out what customers and partners are saying about the value of this offering, check out the What's New in AutoVue 20.1 datasheet. You can also learn all about the latest format support in the AutoVue 20.1 Format Support Sheet. We look forward to seeing you at the webcast. If you have any questions, feel free to ask and we will answer them in this forum. Enjoy AutoVue 20.1!

    Read the article

  • KVM to Xen migration

    - by qweet
    I've recently been appointed to create some VMs for production use, and went gung-ho into making a KVM-based VM instead of finding out what our production server uses. I've only recently found out that our own servers run XenSource, and it doesn't look like they're going to be upgraded in the near future. So for the moment I'm stuck with two choices: attempting to convert the KVM VM into a Xen VM, or rebuilding what I have as a new Xen VM. Being the lazy person I am, I would rather not rebuild the VM. I've looked for documentation on a procedure to do this, but the only thing I can come up with is an ancient article with some vague instructions. So this is my question, Server Fault: can one migrate a VM running on a KVM kernel to a Xen kernel? And if so, how?
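
    A sketch of the usual first step (file names are placeholders, and this covers only the disk image - a paravirtualised Xen guest will also need a Xen-capable kernel and a matching domain configuration): convert the KVM disk image to a raw image that Xen can attach:

        # qcow2 (common under KVM) -> raw, usable by a file-backed Xen domain
        qemu-img convert -O raw guest.qcow2 guest.img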

    Read the article

  • Encrypt EC2 API call

    - by Frank
    I have to host an AMI in the Amazon Marketplace. I need to get the type of instance whenever some user launches the AMI - like whether it's small, medium or large - and based on that I need to make some changes in the AMI when it's created. I can do this with an Amazon API call to get the instance type, but the problem is that the instances created from the AMI will be started by other users, and I cannot embed my AWS credentials in the Amazon API call. Is there any way I can create an anonymous read-only user that can make only specific types of EC2 API calls? Or can I encrypt my EC2 API credentials so no one can use them?
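
    One avenue worth checking (my suggestion, not from the question): from inside a running instance, the EC2 instance metadata service answers on a link-local address with no credentials at all, so the AMI can discover its own instance type without embedding any keys. A minimal sketch:

        # Python sketch - runs inside the instance; no AWS credentials needed
        from urllib.request import urlopen

        with urlopen('http://169.254.169.254/latest/meta-data/instance-type',
                     timeout=2) as resp:
            instance_type = resp.read().decode()
        print(instance_type)  # e.g. 'm1.small'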

    Read the article

  • Speaking in Sweden in April

    - by Fabrice Marguerie
    As this is my first post in 2011, I wish you all the best for the new year. I don't know about you, but for me it will be a year of changes; I'll tell you more about that when the time comes. For the time being, I just wanted to announce my next speaking engagement: I'll be presenting at SDC2011, the Scandinavian Developer Conference. This event will take place on April 4 and 5 in Göteborg, Sweden. My session will be based on my article about memory leaks, "How to detect and avoid memory and resources leaks in .NET applications". During this session we will analyze the most common leak sources in .NET applications and see how to detect them in practice. We will demonstrate, with live examples, several tools that can help you troubleshoot your applications. The goal of this session is to help you fix and avoid leaks in your .NET applications, whether they are built with Windows Forms, WPF, Silverlight, ASP.NET, or as console applications or Windows services. You will also get advice on how to improve your applications so they become less greedy. I hope to see you there if you're around in April :-)

    Read the article

  • FTP Synchronization software for Mac or PC

    - by evanmcd
    Hi, I've been using FTP Synchronizer for a while and have generally had pretty good results with it. But I've just moved to a Mac full-time (at work as well as at home now), so I want to get a native client if I can. I've tried the only one I've found - SuperFlexibleSynchronizer - but it crashed every time I loaded an FTP-to-FTP sync attempt. The most important features to me are: 1) the ability to sync a large number of files (thousands), as I generally work on sites with large numbers of files; 2) FTP-to-FTP sync. This would be very helpful, as I work with some CMS-based sites for which users upload files while on staging, and I don't want to have to move files locally first before moving them live. Thanks! Evan

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields calculated over millions of records, processing of huge files of transactions, etc. I am well aware of unit-testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST) and then automatically run them overnight and check the results - but this is only for T-SQL, and continuous integration is hard. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach derived over the years, but maybe I am just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally, I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but this is quite similar)? I hope this is not too open-ended a question. luke
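
    On the T-SQL side, one concrete option (my example, not from the question) is a unit-testing framework such as the open-source tSQLt, which fakes tables so each test controls its own data; all object names below are invented:

        -- Sketch of a tSQLt-style test for a T-SQL calculation
        EXEC tSQLt.NewTestClass 'testRisk';
        GO
        CREATE PROCEDURE testRisk.[test exposure sums per account]
        AS
        BEGIN
            EXEC tSQLt.FakeTable 'dbo.Transactions';  -- isolate from real data
            INSERT INTO dbo.Transactions (AccountId, Amount) VALUES (1, 100), (1, 50);

            DECLARE @actual MONEY = dbo.fn_Exposure(1);  -- function under test

            EXEC tSQLt.AssertEquals @Expected = 150, @Actual = @actual;
        END;
        GO
        EXEC tSQLt.Run 'testRisk';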

    Read the article

  • Conditional formatting

    - by djerry
    I have a rather annoying format I need to implement. There are two cells, both containing a date (cells A and B). Cell B (e.g. 24-06-2011) should be coloured if the date in cell A falls within a range derived from cell B: between B - 11 days and B - 7 days. So with numbers: if B is 24-06-2011, then the range (which is not in any cell in the spreadsheet) is 13-06-2011 until 17-06-2011. If the date in cell A (let's say 14-06-2011) is in that range, cell B should be coloured. Any ideas?
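
    A minimal formula sketch for the conditional-formatting rule (assuming the dates sit in A1 and B1 and the rule is applied to B1, via "format values where this formula is true"):

        =AND($A1>=$B1-11, $A1<=$B1-7)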

    Read the article

  • PRNG test suite: bitstream and stream length

    - by Martin Trigaux
    On the NIST website there is a tool called sts (Statistical Test Suite) that allows us to test the validity of a pseudo-random number generator based on a stream of bits given as input. When running the program, there are two variables I am not sure I understand: the stream length and the number of bitstreams. Is the stream length the size of the file? The number of bits inside it? The size of one bitstream? Are the bitstreams subsets of the whole file? Chosen how? Let's say I have a text file containing 1,000,000 bits in ASCII. What should my arguments be? You can find the user manual here if needed (I didn't find an explanation of these variables in it). Thank you
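
    For what it's worth, on the usual reading of the STS manual the two parameters are related by a simple product: the stream length is the number of bits in one tested sequence, the number of bitstreams is how many such sequences are cut consecutively (non-overlapping) from the input, and length × count must not exceed the bits available. For example, a 1,000,000-bit file could be tested as 10 bitstreams of 100,000 bits each (10 × 100,000 = 1,000,000).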

    Read the article

  • CMD Command to create folder for each file and move file into folder

    - by Tom
    I need a command that can be run from the command line to create a folder for each file (based on the file name) in a directory, and then move each file into the newly created folder. Example starting folder:

        Dog.jpg
        Cat.jpg

    The following command works great at creating a folder for each file name in the current working directory:

        for %i in (*) do md "%~ni"

    Resulting folder:

        \Dog\
        \Cat\
        Dog.jpg
        Cat.jpg

    I need to take this one step further and move each file into its folder. What I want to achieve is:

        \Dog\Dog.jpg
        \Cat\Cat.jpg

    Can someone help me with one command to do all of this?
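
    An untested sketch extending the same loop (the parentheses keep both commands inside each iteration, and 2>nul hides md's complaint if a folder already exists):

        for %i in (*) do (md "%~ni" 2>nul & move "%i" "%~ni")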

    Read the article

  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that, and that I need to set my domain's NS records for the hosting to work: "Why would you ever want to do it that way? Our service to you includes DNS, and our servers are probably much better than the ones your registrar provides." This was a bit of a surprise, as all of the other webhosts I've worked with happily support this. I've set up (e.g.) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost, and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide a service if they control the DNS too?
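
    For concreteness, the arrangement being described is just a single delegating record in the friend's zone (all names invented):

        ; gallery follows wherever the webhost's shared box goes
        gallery.myfriend.example.  IN  CNAME  shared12.webhost.example.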

    Read the article

  • How to take a backup of an Ubuntu system as it is?

    - by rajat
    I have installed Ubuntu (dual boot) using the Wubi (Windows-based Ubuntu Installer) installer for Windows, and since then I have been working in Linux. It now holds many projects with many dependencies, and I want to install the same Ubuntu on other machines so that I don't need to install Ubuntu first and then install each and every project and its dependencies. There is a folder called ubuntu on my Windows drive which was created by Wubi and which has all the Ubuntu stuff. PS: the other machines have only Windows 7 installed and have the same configuration. Is there any way to install the same Ubuntu I am using on the other machines?

    Read the article

  • How can I mark a pixel in the stencil buffer?

    - by János Turánszki
    I have never used the stencil buffer for anything until now, but I want to change this. I have an idea of how it should work: the GPU discards or keeps rasterized pixels before the pixel shader based on the stencil buffer value at the given position and some stencil operation. What I don't know is how I would mark a pixel in the stencil buffer with a specific value. For example, I draw my scene and want to mark everything drawn with a specific material (this material could be looked up from a texture, so ideally I should mark the pixel in the pixel shader), so that later, when I do some post-processing on my scene, I would only do it on the marked pixels. I didn't find anything on the internet besides how to set up a stencil buffer and explanations of the different stencil operations. I was expecting to find a system-value semantic like SV_Depth to write to in the pixel shader (because the stencil buffer shares the same resource with the depth buffer in D3D11), but there is no such thing on MSDN. So how should I do this? If I am misunderstanding something, please help me clear that up.
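
    A sketch of the usual D3D11 answer (my summary, not from the question): stencil writes are output-merger state rather than a pixel-shader output, so you mark pixels by drawing the relevant geometry with a REPLACE stencil op and a chosen reference value. Marking per pixel based on a texture lookup is trickier - one common workaround is a dedicated marking pass whose pixel shader discards (clip) the non-matching pixels. Variable names below are illustrative:

        // Mark every pixel this draw call touches with stencil value 1.
        D3D11_DEPTH_STENCIL_DESC desc = {};
        desc.DepthEnable = TRUE;
        desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
        desc.DepthFunc = D3D11_COMPARISON_LESS;
        desc.StencilEnable = TRUE;
        desc.StencilReadMask = 0xFF;
        desc.StencilWriteMask = 0xFF;
        desc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
        desc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_REPLACE;
        desc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
        desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
        desc.BackFace = desc.FrontFace;

        ID3D11DepthStencilState* markState = nullptr;
        device->CreateDepthStencilState(&desc, &markState);
        context->OMSetDepthStencilState(markState, 1);  // 1 = the "mark"
        // ...draw the specially-materialed geometry, then restore the previous state...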

    Read the article

  • Introducing Code Map for Visual Studio 2012 September CTP

    - by krislankford
    As part of the Visual Studio 2012 CTP for September, Visual Studio got a little sexier at helping you discover and visualize your code. The introduction of the Code Map feature complements the variety of other tools included with Visual Studio to help you analyze and visualize your projects and solutions. Code Map leverages the DGML format within Visual Studio that is currently used by the Architecture and Modeling tools. This is a nice addition that gets us from point A to point B a little faster. The great thing about Code Map is that you can access the functionality directly from within your code via the context menu. The Code Map functionality is also context-specific, based on your cursor position. You can evaluate and add items such as methods and variables directly to the Code Map window. As you add items, the Code Map surface is updated to show your new item plus any relationships and dependencies that have been introduced in your code. Something else that is very nice is that the Code Map surface is interactive and supports the F12 key (Go To Definition), which can help you navigate your code, especially if you are adding items that span multiple files or projects. To get started, all you have to do is go out and download the September CTP for Visual Studio 2012, located here. Happy coding! Code Map Window

    Read the article

  • Automated testing tool development challenges (for embedded software)

    - by Karthi prime
    My boss wants to come up with a proposal for the following tool: An IDE: able to build, compile and debug, with JTAG programming, for the microcontroller. A test suite that reads the code in the IDE, auto-generates the test cases, and gives the in-target unit-testing results (which is done by controlling code execution on the microcontroller via the IDE). A no-overhead code-coverage tool which interacts with the test suite and IDE. My work is to produce the high-level architecture of this tool, so as to proceed further. My current knowledge: there are toolchains available from the chip manufacturers for the microcontrollers which can be used along with an open-source IDE like Eclipse, and together with an open-source burner a complete IDE for a microcontroller can be built. Test cases can be auto-generated by reading the source file through parsing and scripting, based on keywords. The test suite must be able to command the IDE to control execution through breakpoints and read the register contents from the microcontroller - this enables the in-target unit testing. No-overhead code coverage should be achieved through no-overhead code instrumentation, so as to execute within the resource-constrained environment of the microcontroller. I have the following questions: Any advice on the validity of my understanding? What challenges will I face during development? What helpful open-source tools are there for this? What is the development time for this software? Thanks

    Read the article

  • How to migrate an SQLServer 2000 database from one machine to another

    - by Saiyine
    This January I'm migrating our main SQL Server 2000 database to a beefier server. Is there any standard procedure or documentation on how to do it? I need to replicate everything on the new server (databases, jobs, DTS packages, linked servers, etc.). Edit: I mean SQL Server 2000 on both ends! Edit: Be calm, people, I just crossed the versions with another piece of software I posted about at the same time as this. I even checked Wikipedia to be sure version 8 was 2000. There's no need to flame that much about what is just an erratum.
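
    For the databases themselves, the usual route is backup/restore (a sketch with placeholder names and paths; jobs, DTS packages and linked servers need to be scripted or recreated separately):

        BACKUP DATABASE MainDb TO DISK = 'D:\backups\MainDb.bak'
        -- copy the .bak file to the new server, then:
        RESTORE DATABASE MainDb FROM DISK = 'E:\staging\MainDb.bak'
        WITH MOVE 'MainDb_Data' TO 'E:\data\MainDb.mdf',
             MOVE 'MainDb_Log'  TO 'E:\data\MainDb_log.ldf'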

    Read the article
