Search Results

Search found 33736 results on 1350 pages for 'project structure'.

Page 916 of 1350

  • git-receive-pack: command not found.

    - by Philippe Mongeau
    I created a bare git repo on a local machine with "git init --bare" and added it as the remote origin for the project on my main computer over ssh: git remote add origin [email protected]:repoName.git. I was able to commit and push from my main computer to the other machine the day I created the repo, but today it didn't work. When I ran "git push origin" it returned this error: bash: line 1: git-receive-pack: command not found / fatal: The remote end hung up unexpectedly. Both machines are Macs, the main one running Leopard and the server running Tiger. I think it may be related to git's $PATH on the server, but I'm not sure. I used these instructions to set up my git server: http://blog.commonthread.com/2008/4/14/setting-up-a-git-server
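
    One likely fix, sketched here on the assumption that git on the Tiger machine lives outside the PATH seen by non-interactive ssh sessions (for example under /usr/local/git/bin - adjust to the real install location): either tell the client where the remote helpers are, or extend the server-side PATH.

        # On the main (client) machine: point git at the remote helper binaries
        git config remote.origin.receivepack /usr/local/git/bin/git-receive-pack
        git config remote.origin.uploadpack  /usr/local/git/bin/git-upload-pack

        # Or as a one-off:
        git push --receive-pack=/usr/local/git/bin/git-receive-pack origin master

        # Alternatively, on the Tiger server, expose the binaries to
        # non-interactive shells (sshd does not read .bash_profile for them):
        echo 'export PATH=$PATH:/usr/local/git/bin' >> ~/.bashrc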

    Read the article

  • How can I make my PHP development environment more efficient?

    - by pixel
    I want to start a home-brew pet project in PHP. I've spent some time in my life developing in PHP and I've always felt it was hard to organize the development environment efficiently. In my previous PHP work, I've used a Windows desktop machine and a Linux server for development. This configuration had its advantages: it's easy to configure Apache (and its modules)/PHP/MySQL on a Linux box, and, at the time, this configuration was the same as on the production server. However, I never successfully set up a debug connection between my Eclipse install and Xdebug on the server. Transferring files from my local workspace to the server was also very annoying (either ftp or a Bazaar script moving files from the repository to the web root). For my new setup, I'm considering installing everything on my local machine. I'm afraid that it will slow down workstation performance (LAMP + Eclipse), and that compatibility problems will kick in. What would you recommend? Should I develop using two separate machines? On one? Do you have experience using one of the above configurations in your work?

    Read the article

  • Where to ask a question about startups?

    - by Wolfpack'08
    I've got some questions about how to better run my web application development business, which has only been running for a little more than two years. It's still in its early phases, as I consider the first five years the 'early years'. Being inexperienced with business in general, I always have a lot of questions about whether I am making the right decisions (for example, I often worry about my hiring practices, or that I may have priced new products wrongly). Is there a good site on the Stack Exchange network to ask questions about things like this (for example, this site, the Project Management site, the Salesforce site, or perhaps the Personal Finance site)? I'm combing through questions and answers on each of these sites now, and I can see questions similar to my own. Nothing precisely the same, but similar things on all of the sites. Apart from just reading through previously asked questions, what is a good way to get a sense of whether or not my question fits on a site in the network? If you recommend going outside the network, please also let me know.

    Read the article

  • What are the strengths and weaknesses of existing configuration management systems?

    - by Daniel C. Sobral
    I was looking here for comparisons between CFEngine, Puppet, Chef, bcfg2, AutomateIt and whatever other configuration management systems might be out there, and was very surprised that I could find very little on Server Fault. For instance, I only knew of the first three links above -- the other two I found in a related Google search. I'm not interested in what people think is the best one, or which they like. I'd like to know the following: the configuration management system's name; why it was created (as opposed to using an existing solution); relative strengths; relative weaknesses; license; and a link to the project and examples.

    Read the article

  • Oracle OpenWorld Recap - A Walk in the Clouds (and heat in San Francisco)!

    - by Di Seghposs
    Whether you were one of the 50,000 attendees in San Francisco or one of the million+ online attendees – we’d like to thank you for joining us at Oracle OpenWorld last week! With temperatures in the 80s and 90s, attendees traveled the overheated streets to join packed keynotes and general sessions – all to find the information they came in search of – Oracle solutions to address their business requirements and challenges. The buzz of this year’s OpenWorld was all about ‘The Cloud’. And, the financial management team joined in the cloud buzz with Thomas Kurian’s keynote which highlighted our ERP Cloud Service as the most complete cloud service on the market. Offering the full breadth of business operations, including Financial Management, Risk and Control Management, Project Portfolio Management, Procurement, Sourcing, and Inventory Management, Oracle ERP Cloud Service transforms the back office into a collaborative, efficient, and intuitive hub. And, our product marketing expert on Financial Management, Annette Melatti, provided a glimpse of what the office of finance looks like in the 21st century as well as shared what’s next for Oracle’s financial solutions discussing the future of Financial Management with Fusion Financials, E-Business Suite, PeopleSoft and the JD Edwards solutions. There were over 120 sessions from customers, partners, and Oracle experts that addressed financial management solutions along with demo pods and Meet the Experts sessions. We hope you found what you were looking for! Missed any of the keynotes or general sessions? Watch them on demand here. At OpenWorld, we also announced that Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle ERP Cloud Service to help improve decision-making, implement robust reporting, and take advantage of the cost savings provided by the cloud. The CFO of Lending Club, Carrie Dolan had mentioned that they “are an innovative, data-intensive, high-growth company and needed a solution and partner that could match us. We conducted a thorough review of our options, and Oracle ERP Cloud Service was the clear winner in terms of capabilities and business value as well as commitment to us as a customer.” Read the entire release here. For now, it’s back to business as we gear up for the second half of our fiscal year and start planning for Oracle OpenWorld 2013!

    Read the article

  • Why are transaction monitors in decline? Or are they?

    - by mrkafk
    http://www.itjobswatch.co.uk/jobs/uk/cics.do http://www.itjobswatch.co.uk/jobs/uk/tuxedo.do Look at the demand for programmers (the % of job ads in which the keyword appears), in the first graph under the table. It seems that demand for CICS and Tuxedo has fallen from roughly 2.5% and 1% respectively to almost zero. To me, this seems bizarre: we now have more networked and internet-enabled machines than ever before, and most of them are talking to some kind of database. So you would expect that use of products whose developers spent the last 20-30 years distributing, coordinating and optimizing transactions would be on the rise. And it appears they're not. I can see a few possible causes but can't tell whether they are true: we forgot that concurrency and distribution are really hard, and are redoing it all ourselves, in Java, badly. Erlang killed them all. Projects nowadays have changed character: most business software has already been built and we're all doing internet services, using stuff like Node.js, Erlang, Haskell (I've used RabbitMQ, which is written in Erlang, but it was a "small specialized side project" kind of thing). BigData is the emphasis now, and BigData doesn't need transactions very much (?). None of those explanations seems particularly convincing to me, which is why I'm looking for a better one. Anyone?

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides details about how you influence other people and who influences you. We'll be fetching data from a few social networking sites (e.g. LinkedIn, Facebook, Twitter) to analyze how users interact with one another. For that we need to parse the data, store it in a database, and analyze it so that the strength of the relationship between two users can be determined. We'll also be accessing the data offline to provide accurate results. If we consider Facebook activity, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing the data and storing and retrieving it with the least overhead. I've summarized a few points below: Should we switch to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though no team member has experience with them? Should we use SQL Server Analysis Services? Is there any library other than Json.NET for parsing the data? Is it advisable to use a C# library rather than FQL + GET requests? I've tried to provide as much info as possible. Please share your views.
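
    On the Json.NET point, a minimal parsing sketch, assuming the raw JSON from a Graph API GET request is already in hand (the field names are illustrative, not a guaranteed Graph API schema):

        using Newtonsoft.Json.Linq;

        static class FeedParser
        {
            // json: the raw string returned by a Graph API GET request
            public static void ParseFeed(string json)
            {
                JObject feed = JObject.Parse(json);
                foreach (JToken post in feed["data"])           // Graph list responses wrap results in "data"
                {
                    string id = (string)post["id"];
                    string message = (string)post["message"];   // may be null for photo/share posts
                    // map to your own entities here and persist via EF / SQL Server
                }
            }
        }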

    Read the article

  • Are Data Access Objects old-fashioned? [on hold]

    - by Bono
    A couple of weeks ago I delivered some work for a university project. After a code review with some teachers I got some snarky remarks about the fact that I was (still) using Data Access Objects. The teacher in question mentions the use of DAOs in his classes and always says something along the lines of "Back then we always used DAOs". He's a big fan of Object Relational Mapping, which I also think is a great tool. When I was talking about this with some of my fellow students, they also mentioned that they prefer ORM, which I can understand. It did make me wonder, though: is using DAOs really so old-fashioned? I know that at my work DAOs are still being used, but this is because some of the code is rather old and therefore can't be coupled with an ORM. We also use ORM at my work. Trying to find more information on Google or the Stack Exchange sites didn't really enlighten me. Should I step away from DAOs and only implement ORM? I just feel that an ORM can be overkill for some simple projects. I'd love to hear your opinions (or facts) about this.

    Read the article

  • Write a hashed password to LDAP when creating a new user

    - by alibaba
    I am working on a project with a central user database system. One of the requirements of the system is that there should be only one set of users for all the applications. FreeRADIUS and Samba are two of my applications that both use LDAP as their backend. Since the users must be the same across the entire system, which contains many other applications, I have to read the list of users from the central database and recreate them in the LDAP directories for Samba and FreeRADIUS. The problem is that the users are sent to me from another entity, and I can only save them in the database with their hashed passwords. I don't have access to their cleartext passwords. I am wondering if I can directly enter a hashed password for a new user in LDAP using my preferred hash mechanism. If not, can anyone tell me what strategy I should use? I am running my server on Ubuntu 12.04 and all other applications are the latest versions. My database system is PostgreSQL 9.2. Thank you
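
    For what it's worth, OpenLDAP's userPassword attribute does accept a pre-hashed value as long as it carries a scheme prefix, so a sketch along these lines should work (the DN, attributes and hash are placeholders; note that Samba additionally needs its own sambaNTPassword hash, which cannot be derived from a salted SHA digest):

        # Generate a scheme-prefixed hash (or reuse the one supplied by the upstream system):
        slappasswd -h {SSHA} -s theSecretPassword

        # LDIF for the new entry:
        dn: uid=jdoe,ou=people,dc=example,dc=com
        objectClass: inetOrgPerson
        uid: jdoe
        cn: John Doe
        sn: Doe
        userPassword: {SSHA}<paste the value printed by slappasswd>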

    Read the article

  • How are certain analytics metrics (time on site, etc.) usually distributed?

    - by a barking spider
    I'm not sure if I've come to the right place to ask this question, but I'm gathering some information for a research project. We're trying to design an experiment that'll heavily involve web analytics, and I'm trying to figure out some sensible values of mean +/- standard deviation for the following visitor-level metrics (i.e., visitor 1 spends 2 minutes on site, visitor 2 spends 1 minute -- mean 1.5 +/- 0.71...): time spent on site; page views. If time allowed, we would put up the sites and gather the information ourselves, but we have a grant deadline coming up. I realize that the distributions of these quantities are probably going to be heavily skewed towards zero, but we'll need some reasonable figures or estimates in order to do sample size calculations, etc. Anyway, I'm not sure where else I'd turn, and I certainly have had a difficult time finding these values in the prior literature. If someone could direct me to a paper with the right information, or if you have these figures on hand (perhaps taken directly from your logs!) -- that would be amazing, and I'd love to hear from you. Thanks in advance, and even though I'm not allowed to reveal too much, rest assured that this info'll be applied towards a good cause :)

    Read the article

  • HTML Manifest for Content Folios

    - by Kyle Hatlestad
    I recently worked on a project to create a custom content folio renderer in WebCenter Content. It needed to output the native files in the folio along with a manifest file in HTML format which would list the contents of the folio along with any designated metadata and a relative link to the file within the download.  This way a person could hand someone the folio download and it would be a self-contained package with all of the content and a single file to display the information on the contents.  The default Zip rendition of the folio will output the web-viewable version of the file with an HDA formatted file for each one. And unless you are fluent in HDA or have a tool to read them, they are difficult to consume. I thought this might be useful for others, so I'm posting a copy of the component here. Beyond the standard instructions for installing a component, there is an environment configuration file (folionativezipwithmanifestrenderer_environment.cfg) which has a couple of options. FolioMetadataManifestList - This is a comma separated list of metadata fields (system or custom) that should be included in the manifest file. FolioMetadataManifestUseOriginalFilename - (True or False) If set to True, the filenames in the zip file will be based on the original filename as it was checked into WebCenter Content.  If False, it will use the 'Name' of the item as defined within the Folio.  This is usually the Title of the item. The component also includes the source code, so feel free to use this as a reference for creating other interesting folios. 
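
    As an illustration, the component's environment file might end up looking something like this (the metadata field names are only examples - list whichever system or custom fields should appear in the manifest):

        FolioMetadataManifestList=dDocName,dDocTitle,dDocAuthor,dInDate
        FolioMetadataManifestUseOriginalFilename=true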

    Read the article

  • JSR 308 Moves Forward

    - by abuckley
    I am pleased to announce a number of recent milestones for JSR 308, Annotations on Java Types. Adoption of JCP 2.8: Thanks to the agreement of the Expert Group, JSR 308 operates under JCP 2.8 from September 2012. There is a publicly archived mailing list for EG members, and a companion list for anyone who wishes to follow EG traffic by email. There is also a "suggestion box" mailing list where anyone can send feedback to the EG directly. Feedback will be discussed on the main EG list. Co-spec lead Prof. Michael Ernst maintains an issue tracker and a document archive. Early-Access Builds of the Reference Implementation: Oracle has published binaries for all platforms of JDK 8 with support for type annotations. Builds are generated from OpenJDK's type-annotations/type-annotations forest (notably the langtools repo). The forest is owned by the Type Annotations project. Integration with Enhanced Metadata: On the enhanced metadata mailing list, Oracle has proposed support for repeating annotations in the Java language in Java SE 8. For completeness, it must be possible to repeat annotations on types, not only on declarations. The implementation of repeating annotations on declarations is already in the type-annotations/type-annotations forest (and hence in the early-access builds above), and work is underway to extend it to types.
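
    To illustrate what the JSR adds, a minimal sketch of an annotation that can now be written on uses of a type, not just on declarations (the @NonNull below is declared locally for the example; in practice such annotations come from a pluggable checker such as the Checker Framework):

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Target;
        import java.util.ArrayList;
        import java.util.List;

        @Target(ElementType.TYPE_USE)   // TYPE_USE is the new annotation target introduced for JSR 308
        @interface NonNull { }

        class Demo {
            // the annotation attaches to the type argument, something earlier Java versions could not express
            List<@NonNull String> names = new ArrayList<>();

            String shout(@NonNull String s) {
                return s.toUpperCase();
            }
        }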

    Read the article

  • Storing data in JavaScript across pages

    - by user985482
    Hi, I am a beginner web developer and am trying to build the interface of a simple e-commerce site as a personal project. The site has multiple pages with checkboxes. When someone checks an element it retrieves the price of the element and stores it in a variable. But when I go to the next page and click on new checkbox products, the variable automatically resets to its original state. How can I save the value of that variable in JavaScript? This is the code I've written using sessionStorage, but it still doesn't work; when I move to the next page the value is reset. How can I write this code so that it doesn't reset on each page change? All pages on my website use the same script.

        $(document).ready(function(){
            var total = 0;
            $('input.check').click(function(){
                if($(this).attr('checked')){
                    var check = parseInt($(this).parent().children('span').text().substr(1, 3));
                    total += check;
                    sessionStorage.var_name = 0 + total;
                    alert(sessionStorage.var_name);
                } else {
                    var uncheck = parseInt($(this).parent().children('span').text().substr(1, 3));
                    total -= uncheck;
                }
            });
        });
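
    A minimal sketch of one way to make the total survive page changes, keeping the price-parsing approach above: read the saved value back in on every page load and write it out on every change. (sessionStorage persists per tab for the same origin and stores only strings, hence the parseInt when reading it back.)

        $(document).ready(function () {
            // restore whatever a previous page saved, defaulting to 0
            var total = parseInt(sessionStorage.getItem('total'), 10) || 0;

            $('input.check').on('change', function () {
                var price = parseInt($(this).parent().children('span').text().substr(1, 3), 10);
                if ($(this).is(':checked')) {
                    total += price;
                } else {
                    total -= price;
                }
                sessionStorage.setItem('total', total);  // persist on every change
            });
        });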

    Read the article

  • Oracle WebLogic 12c for New Projects - Webcast November 7th 2013

    - by JuergenKress
    Fast-growing organizations need to stay agile in the face of changing customer, business or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform, allowing you to quickly develop and deploy reliable, secure, scalable and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, where we will show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with new features for lowering the cost of operations, improving performance, and enhancing scalability. Agenda: Oracle WebLogic Server Introduction; Application Development on WebLogic Using Java EE; Overview of the Application Deployment Process; Monitoring Application Performance; Q&A. November 7th, 2013, 9am UTC / 11am EET. REGISTER NOW. For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations. As this is the first time I'm making a threaded application, I'm not sure if the following method of thread synchronization is correct. First of all, I use two global variables to control the two threads: if a global variable is set to 1, the thread can start working; otherwise it must not work. Each thread checks this in an infinite loop, and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1. My dilemma is that, while this process works under some conditions, it slows down the program considerably, and it also fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, or even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of the fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it. My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
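
    One plausible explanation for the Release/-O failures, offered as an assumption rather than a diagnosis: if the flags are plain ints, the optimizer is allowed to hoist the read out of the polling loop, so a spinning thread may never observe the update. A minimal sketch of the same hand-off using C11 atomics, which keeps the loads and stores visible across threads (a condition variable would additionally avoid burning a core while waiting):

        #include <stdatomic.h>

        static atomic_int even_go = 0, odd_go = 0;   /* 1 = triangle data ready, go rasterize */

        /* worker thread for the even scanlines */
        void even_worker(void)
        {
            for (;;) {
                while (!atomic_load(&even_go))       /* not optimized away, unlike a plain int read */
                    ;                                /* still a busy-wait; a condvar would sleep instead */
                /* ... rasterize the even scanlines of the current triangle ... */
                atomic_store(&even_go, 0);           /* signal "done" back to the main thread */
            }
        }

        /* main thread, per triangle: fill the shared triangle structs, then
           atomic_store(&even_go, 1); atomic_store(&odd_go, 1);
           and spin (or wait) until both flags drop back to 0. */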

    Read the article

  • How to know how detailed requirements should be?

    - by user1620696
    This question has to do with the requirements-gathering phase of each iteration in a project based on agile methodologies. It arose because of the following situation: suppose I meet with my customer to gather the requirements and he says something like: "I need to be able to add, edit, remove and see details of my employees". That's fine, but how should we record this requirement? Should we simply write something like "the system must allow the user to manage employees", or should we be more specific and write four points: The system must allow the user to add employees; The system must allow the user to see details of employees; The system must allow the user to edit employees; The system must allow the user to delete employees. Of course, this is just one example of a situation I was unsure about. The main point here is: how do I know how detailed I must be, and how do I know what I should record? Are there strategies for dealing with these things? Thanks very much in advance!

    Read the article

  • ADF - Now with Robots!

    - by Duncan Mills
    I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth revisiting. Check out this video, and then read on: So why so interesting? Well - you probably guessed from the title, ADF is involved. Indeed this is about as far from the traditional ADF data entry application as you can get. Instead of a database at the back-end there's basically a robot. That's right, this remarkable tape drive is controlled through an ADF application, using all your usual friends of ADF Faces, Controller and Binding (but no ADFBC, for obvious reasons). ADF is used both on the touch screen you see on the front of the device in the video, and also for the remote management console which provides a visual representation of the slots and drives. The latter uses ADF's Active Data Framework to provide a real-time view of what's going on in the rack. What's even more interesting (for the techno-geeks) is the fact that all of this is running out of flash storage on a ridiculously small form factor with a tiny processor - I probably shouldn't reveal the actual specs, but take my word for it, don't complain about the capabilities of your laptop ever again! This is a project that I've been personally involved in and I'm pumped to see such a good result and, I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do). More info on the SL150 (should you feel the urge to own one) is here.

    Read the article

  • Recommend ASP.NET and SQL Server 2008 server specifications for about 2000 concurrent users, please.

    - by amkh
    We have a web application project which will be created using ASP.NET 4.0, Entity Framework, and SQL Server 2008 R2. To put the requirement in concrete terms, suppose a typical page of this application runs a query that takes 10 milliseconds to respond on a Core 2 Quad @ 2.8GHz processor with 2x2GB of DDR3 RAM (Entity Framework overhead included). And we will have about 2000 concurrent users at peak times. So, what are the recommended specifications (CPU/RAM/RAID/...) for the server which will host this application? Or, how can I calculate that?
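
    A rough way to calculate it, offered under stated assumptions rather than as a sizing guarantee - assume each page view issues one 10 ms query and each of the 2,000 concurrent users requests a page every 10 seconds on average:

        2,000 users / 10 s per page            = ~200 requests per second
        200 requests/s x 10 ms of query time   = ~2 s of query CPU per wall-clock second (about 2 cores for SQL Server alone)

    That leaves ASP.NET rendering, Entity Framework materialization, and peak spikes on top, so something in the region of 4-8 modern cores with enough RAM to keep the database working set in memory is a plausible starting point. The honest answer, though, is to load-test a representative page (for example with a Visual Studio load test or JMeter) against candidate hardware and measure.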

    Read the article

  • How to fix "An error occurred during the processing of a configuration file required to service this request"

    - by Alex
    Just created a brand new MVC 4 project and instead of the expected "hello world" I got the following error:

    Server Error in '/' Application. Configuration Error
    Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
    Parser Error Message: Default Role Provider could not be found.
    Source Error: Line 244: Line 245: Line 246: Line 247: Line 248:
    Source File: c:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Config\machine.config   Line: 246
    Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.272

    Any idea how to fix this? Thanks
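
    A commonly suggested workaround, assuming the new project does not actually use ASP.NET roles and is merely inheriting a defaultProvider setting from machine.config: disable (or properly configure) the role manager in the application's own Web.config.

        <configuration>
          <system.web>
            <roleManager enabled="false" />
          </system.web>
        </configuration>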

    Read the article

  • How do you tell if advice from a senior developer is bad?

    - by learnjourney
    Recently I started my first job as a junior developer, and a more senior developer is in charge of mentoring me in this small company. However, there have been several times when he gave me advice on things that I just couldn't agree with (it goes against what I learned in several good books on the topic written by experts, and answers to questions I asked on some Q&A sites also agree with me), and given our busy schedule, we probably have no time for long debates. So far, I have been trying to avoid the issue by listening to him and raising a counterpoint based on what I've learned as current good practice. He raises his original point again (most of the time he will say "best practice" or "more maintainable" but doesn't go any further), I take a note (since he didn't raise a new point to counter my counterpoint), think about it and research it at home, but don't make any changes (I'm still not convinced). But recently he approached me yet again, saw my code, and asked me why I haven't changed it to his suggestion. This is the third time in 2-3 weeks. As a junior developer, I know that I should respect him, but at the same time I just can't agree with some of his advice. Yet I'm being pressured to make changes that I think will make the project worse. Of course, as an inexperienced developer I could be wrong and his way might be better; it may be one of those exceptional cases. My question is: what can I do to better judge whether a senior developer's advice is good, bad, or good but outdated in today's context? And if it is bad or outdated, what tactics can I use to avoid implementing it his way, despite his pressure, while still showing that I respect him as a senior?

    Read the article

  • Artists and music - Need Help Deciding on a CMS

    - by infty
    A friend has asked me to build a site with the following features: staff members must be able to add new music and artists to the page; a gallery must be provided (ideally each artist can also have his/her own smaller gallery); users must be able to vote for artists; users must be able to take part in discussions (forums or comment sections); staff members must be able to blog; and staff members must be able to write articles. I did a small project where I actually implemented all of these features, but I want to use an existing content management system for them so that future developers can, hopefully, extend the website more easily - and also so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or WordPress, and after watching hours of tutorial videos on both systems, I still can't make up my mind on whether I should: a) use Drupal 7, b) use WordPress 3, or c) create my own CMS. I can imagine that staff members will also want to create content using iPhone or Android-based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with larger projects like this? The site will have approximately 400,000-500,000 visitors (not daily visitors; based on numbers from a 4-month period last year).

    Read the article

  • DB object passing between classes: singleton, static, or other?

    - by Stephen
    So I'm designing a reporting system at work; it's my first project written in an OO style and I'm stuck on the design choice for the DB class. Obviously I only want to create one instance of the DB class per session/user and then pass it to each of the classes that need it. What I don't know is what's best practice for implementing this. Currently I have code like the following:

        class db {
            private $user = 'USER';
            private $pass = 'PASS';
            private $tables = array('user', 'report', 'etc...');

            function __construct(){
                //SET UP CONNECTION AND TABLES
            }
        };

        class report {
            function __construct ($params = array(), $db, $user) {
                //Error checking/handling trimmed
                //$db is the database object we created
                $this->db = $db;
                //$this->user is the user object for the logged in user
                $this->user = $user;
                $this->reportCreate();
            }

            public function setPermission($permissionId = 1) {
                //Note the $this->db - is this the best practice solution?
                $this->db->permission->find($permissionId);
                //Note the $this->user - is this the best practice solution?
                $this->user->checkPermission(1);
                $data = array();
                $this->db->reportpermission->insert($data);
            }
        };//end report

    I've been reading about using static classes and have just come across singletons (though these appear to be passé already?), so what's the current best practice for doing this?

    Read the article

  • Working with lots of cubes. Improving performance?

    - by Randomman159
    Edit: To sum the question up, I have a voxel-based world (Minecraft-style - thanks, Communist Duck) which is suffering from poor performance. I am not positive about the source, but would like any possible advice on how to get rid of it. I am working on a project where a world consists of a large quantity of cubes (I would give you a number, but world sizes are user-defined). My test world is around 48 x 32 x 48 blocks. Basically these blocks don't do anything in themselves. They just sit there. They start being used when it comes to player interaction. I need to check which cubes the user's mouse interacts with (mouse over, clicking, etc.), and do collision detection as the player moves. I had a massive amount of lag at first, looping through every block. I have managed to decrease that lag by looping through all the blocks, finding which blocks are within a particular range of the character, and then only looping through those blocks for the collision detection, etc. However, I am still going at a depressing 2 fps. Does anyone have any other ideas on how I could decrease this lag? Btw, I am using XNA (C#) and yes, it is 3D.
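
    Since the world is a dense grid, one common trick - sketched here with an assumed Block type and 3D world array, not the asker's actual classes - is to stop scanning all 48 x 32 x 48 blocks every frame: convert the player's position to array indices and visit only the small box of cells around it.

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;   // Vector3

        static class WorldQueries
        {
            // Assumes blocks[x, y, z] is the dense world array and each block is 1 unit wide.
            public static IEnumerable<Block> BlocksNear(Block[,,] blocks, Vector3 pos, int radius)
            {
                int minX = Math.Max((int)pos.X - radius, 0), maxX = Math.Min((int)pos.X + radius, blocks.GetLength(0) - 1);
                int minY = Math.Max((int)pos.Y - radius, 0), maxY = Math.Min((int)pos.Y + radius, blocks.GetLength(1) - 1);
                int minZ = Math.Max((int)pos.Z - radius, 0), maxZ = Math.Min((int)pos.Z + radius, blocks.GetLength(2) - 1);

                for (int x = minX; x <= maxX; x++)
                    for (int y = minY; y <= maxY; y++)
                        for (int z = minZ; z <= maxZ; z++)
                            yield return blocks[x, y, z];   // radius 2 visits at most 125 cells instead of 73,728
            }
        }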

    Read the article

  • Terminal runs svn commands very slowly, how can I speed this up?

    - by Paul
    Spending all day in the terminal is beginning to get frustrating. We're working with large CakePHP projects, including a ton of schema files and complex controllers. Whenever I go into a project and enter svn up or svn ci, my system chokes. It takes a good 15-30 seconds before it returns what revision number I'm on. I'm running OS X 10.6 on a MacBook Pro. Any reason this is happening? Any way I could fix this speed issue?

    Read the article
