Search Results

Search found 16324 results on 653 pages for 'per thread'.


  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are semaphores. For each semaphore I generate a red light time and a green light time, represented with the syntax [R:T1, G:T2]. For example (the numbers are the segment lengths in pixels):

        A --(119)-- B: [R:6, G:4] --(185)-- C: [R:5, G:5] --(250)-- D

    I want to calculate a car's travel time from A to D. Right now I do it with this pseudo code:

        function get_travel_time(semaphore_configurations) {
            time = 0;
            for (i = 1; i < path.length; i++) {
                prev_node = path[i-1];
                next_node = path[i];
                cost = cost_between(prev_node, next_node);
                time += cost / movement_speed;  // movement_speed = 50 px per second

                light_times = get_light_times(path[i], semaphore_configurations);
                lights_cycle = get_lights_cycle(light_times);  // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
                lights_sum = light_times.red_time + light_times.green_time;  // full cycle length
                light = lights_cycle[cost % lights_sum];
                if (light == "R") {
                    time += light_times.red_time;
                }
            }
            return time;
        }

    So for the distance of 119 between A and B the travel time is 119/50 = 2.38 s (the measured time is between 2.5 s and 2.6 s); then we add time if we arrive at B on a red light. Whether we arrived on a red light is decided by these lines:

        lights_cycle = get_lights_cycle(light_times)  // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
        lights_sum = light_times.red_time + light_times.green_time
        light = lights_cycle[cost % lights_sum];
        if (light == "R") {
            time += light_times.red_time;
        }

    This pseudo code doesn't reproduce the measured times exactly, but it comes very close. Any idea how I should calculate this?
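
    One plausible source of the residual gap: the wait at a red light is the remainder of the red phase at the moment of arrival, not the full red duration, and indexing the cycle with the integer cost drops the fractional part of the arrival time. Below is a minimal Java sketch of phase-based waiting (not the poster's code; it assumes every light starts its cycle at t = 0 with red first, as the [R,R,R,G,G,G,G] cycle suggests):

        // Wait out only the remaining red time, based on the exact arrival instant.
        static double waitAtLight(double arrival, double red, double green) {
            double phase = arrival % (red + green);      // position inside the light's cycle
            return (phase < red) ? (red - phase) : 0.0;  // remaining red, or no wait on green
        }

        static double travelTime(double[] segments, double[][] lights, double speed) {
            double t = 0;
            for (int i = 0; i < segments.length; i++) {
                t += segments[i] / speed;                // driving time for this segment
                if (i < lights.length)                   // a light at the far intersection
                    t += waitAtLight(t, lights[i][0], lights[i][1]);
            }
            return t;
        }

    With segments = {119, 185, 250}, lights = {{6, 4}, {5, 5}} and speed = 50, the first leg contributes 2.38 s, and the wait at B depends on where 2.38 s falls inside B's 10-second cycle rather than on cost % 10.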

    Read the article

  • Back to Sony

    - by Bunch
    Well, I switched back to Sony. After about a year of debating whether or not to keep my XBox 360 or get a PS3, I decided over the weekend to trade in the 360 for a PS3. I had thought about keeping both, but I really don't need two gaming systems. So far I like it: the graphics are good and the game selection is pretty much the same. The game exclusives didn't sway me one way or the other (i.e. I've never played Halo, so you can't miss what you never played). My main reasons for switching were:

    - RROD – I've had three, and I don't play a huge amount per week
    - Free online gaming – I never did buy a Live Gold account, even though it is affordable
    - Blu-ray player – figured this is as good a time as any to finally get one
    - Netflix streaming with no need to upgrade your online account, as on the XBox
    - MUCH quieter system
    - Finally at a $299 price point

    All in all, the last point was the main one for me. Like a lot of other folks, I was really put off by the PS3's original pricing of $499 and $599.

    Technorati Tags: Gaming

    Read the article

  • How to approach scrum task burn down when tasks have multiple peoples involvement?

    - by AgileMan
    In my company, a single task can never be completed by one individual: a separate person QAs and code reviews each task. This means each individual gives their own estimate, per task, of how much time it will take to complete. The problem is, how should I approach burn down? If I aggregate the hours together, assume the following estimate:

        10 hrs - Dev time
         4 hrs - QA
         4 hrs - Code review
        Task estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their own part of it. Should they mark their own effort remaining, and then ADD the other roles' estimates to that? How are you guys doing this?

    UPDATE: To help clarify a few things, at my organization each task within a story requires 3 people:

    - someone to develop the task (write unit tests, etc.)
    - a QA specialist to review the task (they primarily do integration and regression tests)
    - a tech lead to do code review

    I don't think there is a wrong way or a right way, but this is our way... and that won't be changing. We work as a team to complete even the smallest slice of a story whenever possible. You cannot test whether something works until it is dev complete, and you cannot review the quality of the code before then either... so the best you can do is split things into small logical slices, so that the bare minimum of functionality can be tested and reviewed as early in the process as possible. My question to those who work this way is how to burn down a "task" set up like this. Unless a task has its own sub-tasks (which JIRA doesn't allow)... I'm not sure of the best way to track "what's left" on a daily basis.
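
    One way to make "effort remaining" well defined is sketched below (a minimal sketch; the class and role names are invented, not from JIRA): keep a remaining figure per role and plot their sum on the burn-down, so each person only ever updates their own number and never guesses at the others'.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Hypothetical per-role burn-down for a single task.
        public class TaskBurndown {
            private final Map<String, Double> remaining = new LinkedHashMap<>();

            public TaskBurndown(double dev, double qa, double review) {
                remaining.put("dev", dev);       // 10 hrs in the example above
                remaining.put("qa", qa);         // 4 hrs
                remaining.put("review", review); // 4 hrs
            }

            // each role reports only its own hours left
            public void update(String role, double hoursLeft) {
                remaining.put(role, hoursLeft);
            }

            // the single number the chart burns down (18 hrs at the start)
            public double totalRemaining() {
                return remaining.values().stream().mapToDouble(Double::doubleValue).sum();
            }
        }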

    Read the article

  • Defining formula through user interface in user form

    - by BriskLabs Pakistan
    I am a student developing a simple assignment - a Windows Forms application in Visual Studio 2010. The application is supposed to construct formulas according to the user's requirements. The process: it has to pick data from columns of a Microsoft Access database, and the user should be able to pick the data by column name, as in a drop-down menu, and create reusable formulas from it (configure once, change again later). The following column titles from the database can be picked, for example:

        Col 1: Marks in Maths
        Col 2: Total Marks in Maths
        Col 3: Marks in Science
        Col 4: Total Marks in Science

    Finally, we should be able to construct any formula in the UI, such as:

        (Col 1 + Col 3) / (Col 2 + Col 4) = Formula 1

    Once this formula is set, it is saved and the user assigns a name to it. He or she can then use the formula, and the results appear in a window below; i.e. the user can calculate the desired figures by manipulating only the underlying data in the UI layer: choose the data for a period, apply the formula, and get the answer.

    Problem: it looks like I have to create an app where the rules are set through the UI... which means no stored procedures are required in SQL. Please suggest the right approach.
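
    Independent of the UI technology, one approach is to store each saved formula as a small expression tree keyed by the user-chosen name, then evaluate it against a row of column values at display time; nothing needs to live in SQL. A minimal sketch (in Java for illustration; all class and column names are invented):

        import java.util.Map;

        interface Expr { double eval(Map<String, Double> row); }

        // a leaf: one database column, picked from the drop-down
        class Col implements Expr {
            final String name;
            Col(String name) { this.name = name; }
            public double eval(Map<String, Double> row) { return row.get(name); }
        }

        // an inner node: one arithmetic operation chosen in the UI
        class Op implements Expr {
            final Expr left, right; final char op;
            Op(Expr left, char op, Expr right) { this.left = left; this.op = op; this.right = right; }
            public double eval(Map<String, Double> row) {
                double a = left.eval(row), b = right.eval(row);
                switch (op) {
                    case '+': return a + b;
                    case '-': return a - b;
                    case '*': return a * b;
                    default:  return a / b;
                }
            }
        }

        class Demo {
            public static void main(String[] args) {
                // "Formula 1" = (Col 1 + Col 3) / (Col 2 + Col 4)
                Expr formula1 = new Op(
                    new Op(new Col("MarksMaths"), '+', new Col("MarksScience")), '/',
                    new Op(new Col("TotalMaths"), '+', new Col("TotalScience")));
                Map<String, Double> row = Map.of("MarksMaths", 40.0, "TotalMaths", 50.0,
                                                 "MarksScience", 35.0, "TotalScience", 50.0);
                System.out.println(formula1.eval(row)); // (40 + 35) / (50 + 50) = 0.75
            }
        }

    Because the tree is plain data, it can be serialized alongside its name, which gives the "configure once, reuse later" behaviour.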

    Read the article

  • TransformXml Task locks config file identified in Source attribute

    - by alexhildyard
    As background: the TransformXml MSBuild task is typically invoked in a custom build step to mark up a web.config file with per-environment configuration; its flexible directives offer highly granular control over the insertion, removal, substitution and transformation of existing configuration hierarchies.

    For those using the TransformXml task (typically in a Web Deployment Project), I raised an issue against Visual Studio 2010 in which the file handle on the input file was not released, meaning that following transformation the source file remained locked. As a result, the best way to transform a file was first to rename it, transform it, and then copy it back, leaving the "locked" file to be freed up later. I just heard today that this has now been resolved in Visual Studio 2012 RTM.

    That's good news, because web.config transformations offer a lot. An intelligent, automated build process will swap in the relevant transform(s), making it much easier to synthesise the developer and build-server builds. This makes for a simpler and more exemplary build process, and with the tighter coupling comes a correspondingly quicker response to developmental change.

    Oh, and don't forget -- it isn't just web.configs you can transform. You can transform app.configs, or indeed any XML file that honours the task's schema and hierarchical rules.

    Read the article

  • configure open_basedir under Plesk

    - by cori
    This might be a question for ServerFault, and if it weren't for the Plesk aspect I would have asked it there to start with, so if it's better suited for over there let me know and I'll move it.

    I'm working on a dedicated server set up as a reseller account, with Plesk to manage the domains and server configuration, and I need to add a directory to the local open_basedir configuration for a specific vhost. Given Plesk's normal methodology, I expected to be able to go to /var/www/vhost/{%DOMAINNAME%}/conf, modify vhost.conf and place the new value there, as I have successfully done with other configuration settings for this domain (turning safe_mode off, for instance). When I do so, however, the new setting doesn't take (per phpinfo();). If I edit httpd.conf (which the Plesk configuration specifically says not to do, in the notes at the top of httpd.conf), the setting takes.

    Is there something specific about the open_basedir setting that makes it not configurable in vhost.conf? How much trouble am I letting myself in for by editing the vhost-specific httpd.conf (I imagine if someone makes changes in the Plesk web interface it might be overwritten, but what other risk is there)? Thanks!
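
    For what it's worth, a commonly reported gotcha (worth verifying against your Plesk version): the config Plesk generates already sets open_basedir inside a <Directory> block, and per-directory values win over bare vhost-level ones, so an open_basedir line in vhost.conf usually only takes effect when wrapped in its own <Directory> block. A sketch, with hypothetical paths:

        <Directory /var/www/vhosts/example.com/httpdocs>
            php_admin_value open_basedir "/var/www/vhosts/example.com/httpdocs:/tmp:/extra/dir"
        </Directory>

    After editing vhost.conf, Plesk typically needs to regenerate and reload the web server configuration (older releases ship a websrvmng --reconfigure-vhost command for this; the exact tool varies by version).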

    Read the article

  • Confusion with floats converted into ints during collision detection

    - by TheBroodian
    So in designing a 2D platformer, I decided that I should be using a Vector2 to track the world location of my world objects, to retain some sub-pixel precision for slow-moving objects and other such subtle nuances, while representing their bodies with Rectangles, because as far as collision detection and resolution are concerned, I don't need sub-pixel precision. I thought that the following line of thought would work smoothly...

        Vector2 wrldLocation;
        Point WorldLocation;
        Rectangle collisionRectangle;

        public void Update(GameTime gameTime)
        {
            Vector2 moveAmount = velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            wrldLocation += moveAmount;
            WorldLocation = new Point((int)wrldLocation.X, (int)wrldLocation.Y);
            collisionRectangle = new Rectangle(WorldLocation.X, WorldLocation.Y,
                                               genericWidth, genericHeight);
        }

    and I guess in theory it sort of works, until I try to use it in conjunction with my collision detection, which works by using Rectangle.Offset() to project where collisionRectangle would supposedly end up after applying moveAmount to it. If a collision is found, it finds the intersection and subtracts the difference between the two intersecting sides from the given moveAmount, which should give a corrected moveAmount to apply to the object's world location that prevents it from passing through walls and such.

    The issue here is that Rectangle.Offset() only accepts ints, so I'm not really receiving an accurate adjustment to moveAmount for a Vector2. If I leave wrldLocation out of my previous example and just use WorldLocation to keep track of my object's location, everything works smoothly; but then, obviously, if my object is given a velocity of less than 1 pixel per update, that velocity may as well be 0, which I feel I may regret further down the line. Does anybody have any suggestions about how I might go about resolving this?
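
    One common pattern for exactly this situation, sketched below (plain Java rather than XNA, with invented names): keep the fractional part of each move as a carried remainder, and step the integer body pixel by pixel. Sub-pixel velocities then accumulate across frames instead of truncating to zero, while collision stays purely integer-based.

        // Integer collision position plus a carried sub-pixel remainder (one axis shown).
        class Mover {
            int x;              // integer position backing the collision rectangle
            float remainderX;   // sub-pixel movement carried between updates

            void moveX(float amount) {
                remainderX += amount;
                int pixels = Math.round(remainderX); // whole pixels available this frame
                remainderX -= pixels;
                int step = Integer.signum(pixels);
                while (pixels != 0) {
                    if (collidesAt(x + step)) { // axis-aligned test one pixel ahead
                        remainderX = 0;         // drop the remainder on impact
                        break;
                    }
                    x += step;
                    pixels -= step;
                }
            }

            boolean collidesAt(int newX) { return false; } // stub for the real AABB test
        }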

    Read the article

  • Self-serv advertising service

    - by Mystere Man
    I am seeking a self-serv advertising service for my websites, but I have a few restrictions that seem to make what I'm looking for hard to find. Specifically, I want to place "advertise here" links on my pages and allow end users to purchase advertising on that site, page, and location. These ads will not be part of a national network. Requirements:

    - Supports multi-tenancy - that is, I have a number of domains using the same "web application" but with customized content per domain. When a customer wants to advertise on a given domain, the ads will only appear on that domain and on that page of the domain (even though the page name may be the same across multiple domains).
    - Supports fixed ad prices, not just CPC. I need monthly and quarterly pricing regardless of performance.
    - Integrates with OpenX and other ad networks, so that if there is no self-serv ad in a given zone, it will use national or direct advertising.

    Shiny Ads has much of this, but I'm looking for alternatives, as their prices are a bit crazy (20%) and they can only do PayPal.

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now what's interesting is that these domains are crossing each other in sitelinks for searches such as "calvary christian college townsville". The green data here is for one school (the Townsville school, as per the search term), and the red data is for the other school. I put in a demotion for this 6 months ago (we control calvary.qld.edu.au); however, we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we would need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so:

        texCoord = (texCoord * (256/258)) + (1/258)

    Yet even with this I am left with the following problem: the edges of the normal map still prove to be problematic. Setting the texture wrap mode to "clamp" also proved no help.

    EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel, so that a derivative can be calculated in order to find the "normal" of the given pixel. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords, as seen above). Java:

        for (int i = 0; i < vertices.length; i++) {
            Vector2f coord = new Vector2f(vertices[i].x / worldSize, vertices[i].z / worldSize);
            texCoords[i] = coord;
        }

    The quad used as input here rests on the X0Z plane. 'worldSize' is the diameter of the planet. No negative texCoords are produced, as the quad used as input to this method is not centred around the origin. Is there something I am missing here? Thanks.
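
    One common remedy, assuming the filter can read the 258x258 height array directly (a sketch; the names are invented): clamp the kernel's sample coordinates at the borders, so edge texels derive their gradient from valid neighbours instead of wrapped or undefined data.

        // Clamped read: out-of-range coordinates snap to the nearest border texel.
        static float heightAt(float[][] h, int x, int y) {
            int cx = Math.max(0, Math.min(h.length - 1, x));
            int cy = Math.max(0, Math.min(h[0].length - 1, y));
            return h[cx][cy];
        }

        // Horizontal Sobel gradient built entirely from clamped reads.
        static float sobelX(float[][] h, int x, int y) {
            return (heightAt(h, x + 1, y - 1) + 2 * heightAt(h, x + 1, y) + heightAt(h, x + 1, y + 1))
                 - (heightAt(h, x - 1, y - 1) + 2 * heightAt(h, x - 1, y) + heightAt(h, x - 1, y + 1));
        }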

    Read the article

  • Installing 13.04 on an EFI partition - Share with Windows 8?

    - by mengelkoch
    Information I've found here suggests that for my system I need to install 13.04 into an EFI-type partition, since it needs to boot as UEFI. I also understand it is advisable to have only ONE EFI partition on the disk; I've read here that it is OK for Ubuntu and Windows to share the same partition (please confirm). When I try to install into the existing EFI partition, I get the message "No root file system is defined. Please correct from partitioning menu." Do I change the EFI boot partition to another type? Doesn't that defeat the purpose? If I change it to the Ext4 journaling file system, I am given the opportunity to define the '/' mount point. I haven't proceeded beyond this point for fear that I will destroy Windows 8 by altering this partition. BTW, I created three partitions in Windows before installing, per the helpful response to my previous question. But if I try to install into the partition I created for Ubuntu, I get the "No root file system..." error again.

    Read the article

  • Genetic Algorithm new generation exponentially increasing

    - by Rdz
    I'm programming a genetic algorithm in C++, and after researching all kinds of ways of doing the GA operators (selection, crossover, mutation) I came up with a doubt. Let's say I have an initial population of 500. My selection consists of taking the top 20% of those 500 (based on best fitness), so I get 100 individuals to mate. When I do the crossover I get 2 children, which together have a 50% chance of surviving. So far so good. I start the mutation, and everything's OK. Now when I start choosing the next generation, I see that I have a big number of children (4950, if you want to know). The thing is, every time I run the GA, if I send all the children to the next generation, the number of individuals per generation increases exponentially. So there must be a way of choosing children to fill a new generation without growing beyond the initial population size. What I'm asking here is whether there is a way of choosing the children to fill the new generations, OR whether I should somehow choose (and maybe reduce) the parents that mate so I don't get so many children in the end. Thanks :)
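
    The standard fix is to make the generation size an invariant: breed exactly N children per generation, with parent pairs drawn randomly (with repetition) from the selected pool, instead of mating every possible pair. A sketch, in Java for brevity, with the problem-specific parts stubbed out:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;
        import java.util.Random;

        class Generation {
            // Produce the next generation with exactly n individuals.
            static List<double[]> next(List<double[]> pop, int n, Random rnd) {
                List<double[]> parents = new ArrayList<>(pop);
                parents.sort(Comparator.comparingDouble(Generation::fitness).reversed());
                parents = parents.subList(0, Math.max(2, n / 5)); // top 20% by fitness

                List<double[]> children = new ArrayList<>(n);
                while (children.size() < n) {                     // stop at n, not at "all pairs"
                    double[] a = parents.get(rnd.nextInt(parents.size()));
                    double[] b = parents.get(rnd.nextInt(parents.size()));
                    children.add(mutate(crossover(a, b, rnd), rnd));
                }
                return children;
            }

            static double fitness(double[] g) { return 0; }                                   // stub
            static double[] crossover(double[] a, double[] b, Random r) { return a.clone(); } // stub
            static double[] mutate(double[] g, Random r) { return g; }                        // stub
        }

    A common variation (elitism) copies the best few parents into the new generation unchanged, so the best fitness never regresses.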

    Read the article

  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that attempts to change multiple habits at once. I know this is counter to conventional wisdom, but I've tried the conventional approach for years and am now trying it my way. (I just wanted to say that to keep it from distracting people from the actual question.)

    Anyway, I'd like to write something that runs on a remote server, monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that:

    - runs 24/7
    - may have multiple independent tasks to run at once
    - may have tasks that require other tasks to run first
    - lets tasks be scheduled by specific time, recurrence (i.e. "run every 5 mins"), or interval (i.e. "run from 2pm to 3pm")

    My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, but that is no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quickly. So my question is, how should I approach this from here? I'm not tied to any one language, though I'm partial to Python/JavaScript.

    Thoughts:

    - Could be done as a set of scripts that include a scheduling mechanism, with one script per bit of logic... but the idea just feels wrong to me.
    - Building it as a daemon could be helpful, but I'm still unsure what to do about the dozens of if-else statements for detecting the current time.
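
    For the daemon route, a single long-lived scheduler process replaces most of the if-else time checks. A minimal sketch in Java (task bodies are placeholders): recurring tasks map directly onto fixed-rate scheduling, and an interval task becomes a frequent tick that checks whether it is inside its window.

        import java.time.LocalTime;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class HabitDaemon {
            public static void main(String[] args) {
                ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

                // "run every 5 minutes"
                scheduler.scheduleAtFixedRate(
                    () -> System.out.println("check habit A"), 0, 5, TimeUnit.MINUTES);

                // "run from 2pm to 3pm": tick every minute, act only inside the window
                scheduler.scheduleAtFixedRate(() -> {
                    if (LocalTime.now().getHour() == 14) {
                        System.out.println("habit B window active");
                    }
                }, 0, 1, TimeUnit.MINUTES);
            }
        }

    Dependencies between tasks ("run B only after A has run") then become ordinary program state checked at the top of a task, rather than scheduling logic.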

    Read the article

  • A fix for the design time error in MVVM Light V4.1

    - by Laurent Bugnion
    For those of you who installed V4.1 of MVVM Light and created a project for Windows Phone 8, you will have noticed an error showing up in the design surface (either in the Visual Studio designer or in Expression Blend). The error says: "Could not load type 'System.ComponentModel.INotifyPropertyChanging' from assembly 'mscorlib.extensions'", with additional information about version numbers. The error is caused by an incompatibility between versions of System.Windows.Interactivity. Because this assembly is strongly named, any version incompatibility causes the kind of error shown here (for an interesting discussion on the strong naming issue, see this thread on Codeplex). I managed to resolve the issue for Windows Phone 8 and will publish a cleaned-up installer next week. In the meantime, in order to allow you to continue development, please follow these steps:

    1. Download the new DLLs zip package (MVVMLight_V4_1_25_WP8).
    2. Right-click on the zip file and select Properties from the context menu.
    3. Press the "Unblock" button (if available) and then OK.
    4. Right-click again on the zip package and select "Extract all…". Select a known location for the new DLLs.
    5. Open the MVVM Light project with the design-time error in Visual Studio 2012.
    6. Open the References folder in the Solution Explorer.
    7. Select the following DLLs: GalaSoft.MvvmLight.dll, GalaSoft.MvvmLight.Extras.dll, Microsoft.Practices.ServiceLocation.dll and System.Windows.Interactivity.dll.
    8. Press "Delete" and confirm to remove the DLLs from your project.
    9. Right-click on References and select Add Reference from the context menu.
    10. Browse to the folder with the new DLLs, select the four new DLLs and press OK.
    11. Rebuild your application, and open it again in Blend or in the Visual Studio designer.

    The error should be gone now. In the next few days, as time allows, I will publish a new MSI containing a fixed version of the DLLs as well as a few other improvements. This quick fix should, however, allow you to continue working on your Windows Phone 8 projects in design mode too.

    Laurent Bugnion (GalaSoft)

    Read the article

  • How to collaborate on features using github

    - by Robert Dailey
    github encourages one fork per user, so that the user can work independently on a feature and then ask for that feature to be accepted into the main repository via a pull request. However, what if two developers need to collaborate on that feature? What is the ideal workflow for this? I can see a number of options:

    1. Both developers fork the original repository, and each developer pulls/pushes changes from/to the other's repository. This seems like a lot of work (lots of tiny operations) and also creates a delay between changes, which widens the window for conflicts.
    2. Developer 1 forks from the main repository, and developer 2 forks from developer 1. Mainly the same as #1, but hopefully it simplifies developer 2's life a little?
    3. Developer 1 gives developer 2 permissions on his own fork, so they both work out of the same central repository. Not sure if this is ideal.

    I'm also curious where branches come into this. Obviously there would be a branch for the feature itself, but that branch can't exist in a single place; it would have to exist on multiple forks and be synchronized. Basically I'm just really confused about this workflow and would like an approach for how this can best be accomplished.
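
    For option 3 (both developers pushing to one fork), the day-to-day commands are plain git; the repository and branch names below are hypothetical:

        # Developer 2, with push access granted to Developer 1's fork:
        git clone https://github.com/dev1/project.git
        git checkout -b feature-x origin/feature-x   # track the shared feature branch
        # ...commit work...
        git push origin feature-x

        # For options 1 and 2, the other fork is just an extra remote:
        git remote add dev1 https://github.com/dev1/project.git
        git fetch dev1
        git merge dev1/feature-x

    Either way there is exactly one feature branch; the question is only how many remotes carry a copy of it.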

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how to modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern: using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe, from recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns:

    - I create an index for each month.
    - By default, searches are aliased against the most recent X months.
    - If no results are found, we can search against the previous X months.
    - As we grow, we can reduce the interval X.
    - Each document is routed by the user ID.

    So, this should let us do the following:

    - Search by user: this will search all indices, but only 1 shard in each.
    - Search by time: this will search ~2 indices (by default) across all shards.

    Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros/cons of the above scenario.
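
    At query time the combined pattern looks roughly like this (index names, alias and user ID are invented, and the REST syntax is a sketch of Elasticsearch's API rather than a tested recipe):

        # Per-user search: routing targets one shard of each monthly index
        GET /posts-2013-05,posts-2013-06/_search?routing=user42
        { "query": { "match": { "body": "report" } } }

        # All-user search over the recent window: an alias spanning the last X indices
        GET /posts-recent/_search
        { "query": { "match": { "body": "report" } } }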

    Read the article

  • "Can't open display" even after access with xhost

    - by Yann
    I'm trying to run a graphical program remotely, without using ssh. I've set the display variable on the server (let's say server.com; Linux, not Ubuntu, and no su rights) to point to my workstation (workstation.com, Ubuntu 10.04):

        setenv DISPLAY workstation.com:0

    Then on my workstation I've tried both:

        xhost +server.com
        xhost +

    Then I ssh into the server (to test things) with ssh user@server.com and try to run xclock, and get the following error:

        Error: Can't open display: workstation.com:0

    I've looked at /etc/ssh/ssh_config on the workstation and I should be forwarding correctly: X11Forwarding yes. How do I go about troubleshooting this? What logs on the workstation document these failed attempts?

    To explain why I'm doing this: I want to run a batch job on a server to debug an MPI-based parallel program, and I want to run xterm as the batch job executable, per the instructions provided by the system admins. This setup used to work. I reinstalled things on my workstation and since then I frequently get a one-time message along the lines of:

        The authenticity of host 'hostname (XXX.XXX.XXX.XX)' can't be established.

    My attempt to fix the above was to move my ~/.ssh/known_hosts file to a backup on both server and host, and then to ssh from each to the other with the flag -o StrictHostKeyChecking=no. I no longer get that message, but I was wondering: does this play a part in why X11 forwarding is not working?

    Read the article

  • Laptop runs HOT after 12.10 upgrade!

    - by dinkelk
    I was running 12.04 for 6 months; my laptop ran almost silently and cool enough to hold on my lap. I updated to 12.10, and now my computer gets too hot to hold on my lap and the fan is constantly running at full blast. This is the output of sensors:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:         +84.0°C  (crit = +99.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0: +84.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 0:        +74.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 1:        +72.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 2:        +75.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 3:        +84.0°C  (high = +86.0°C, crit = +100.0°C)

        radeon-pci-0100
        Adapter: PCI adapter
        temp1:         +76.0°C

    I have an HP Pavilion dv6, i7, AMD Radeon graphics. Please let me know if you need additional information. What could be different between the two Ubuntu versions that caused such a drastic change?

    Edit 1: Per @Paul's suggestion, I ran htop to try to narrow down the problem, about 10 minutes after boot-up; htop, Yakuake, and a Chrome page with one tab opened to this question are all that I have manually opened. The most CPU-taxing program is htop itself, so I think the problem must lie elsewhere; my temps are already up to ~65C for the CPU and ~69C for the GPU, with nearly 0% CPU usage.

    Read the article

  • WebLogic 12c training in Dutch–May 10th & 11th 2012 Utrecht Netherlands

    - by JuergenKress
    Axis into ICT offers you the opportunity to increase your skills. We are organizing 'Bring Your Own Laptop' knowledge sessions. In a small group of up to 8 people we will discuss all the practical aspects of WebLogic Server you ever wanted to know. This is not a standard course, but a training where applying the material in practice is what matters. All participants will receive their own virtual machine, which gives you the ability to continue afterwards with your own practice environment. By keeping the groups small we create an informal atmosphere with plenty of room for all your questions, or even to discuss your specific situation. The approach is highly interactive; after all, you are attending to increase your knowledge. Topics that will be covered:

    - Introduction
    - JVM Tuning
    - Deployment
    - Diagnostic Framework
    - Class Loading
    - Security
    - Configure Resources
    - Clustering
    - Scripting

    Interested in the 'Bring Your Own Laptop' knowledge session on WebLogic 12c? Register for one of the two dates by using the form below. After registration you will receive a confirmation by e-mail. The training will be in Dutch!

    Date: May 10 & 11, 2012, from 09:30 - 17:00 hrs
    Location: Axis into ICT Headquarters (Utrecht)
    Expenses: € 700,- per person (VAT excluded)

    For registration and details please visit our website. Want to promote your event? Let us know on Twitter @wlscommunity! For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: Axis, education, WebLogic training, WebLogic, WebLogic Community, OPN, Oracle, Jürgen Kress, WebLogic 12c

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)

    but:

    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index, where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)?

    EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
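
    For reference, the commonly recommended shape of this is sketched below (Lucene 4.x-era names; constructors and method names shift between 3.x and 4.x, and dir, version, analyzer and doc are assumed to exist): one long-lived IndexWriter that is committed but never closed during normal operation, with visibility controlled by refreshing the manager.

        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(version, analyzer));
        SearcherManager manager = new SearcherManager(writer, true, new SearcherFactory());

        writer.addDocument(doc);   // buffered; not yet visible to searches
        writer.commit();           // durable on disk, and the writer stays open
        manager.maybeRefresh();    // changes become visible to newly acquired searchers

        IndexSearcher searcher = manager.acquire();
        try {
            // run queries against searcher...
        } finally {
            manager.release(searcher);
        }

        // writer.close() only on application shutdown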

    Read the article

  • VCS strategy with TeamCity and CI

    - by Luke Puplett
    I'm planning a strategy which seeks to allow automated deployment of a website codebase into QA and production on check-in. We're using the fabulous TeamCity. We want to control release to live production; i.e. not have every check-in on Trunk go live. So my plan is to use Trunk as QA. Committing to Trunk triggers deployment to QA. I will then have a Production branch which also triggers deployment on commit, to the live site. The idea is simply that Trunk represents the mainline codebase but it hasn't gone live yet. We can branch features and do daily pulls from Trunk into those feature branches as per normal and merge/re-integrate into Trunk when we're happy for it to go to QA. When the BAs give the nod, we then smash a bottle of champagne and merge Trunk to Production and out she goes. I've never seen it done like this. Other greenfield CI strategies involve hiding features and code from production via config - this codebase can't cope with that - or just having CI on QA and taking cuts and manually pushing to live. Does my plan sound alright?

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring.

    Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases to be stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software.

    Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously, because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the failed server time to get back online.

    The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS.

    According to Disasterrecovery.org, disaster recovery planning is how companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • Charging by the hour/project

    - by thesam18888
    This is related to a question I asked earlier: How to end a relationship with a client without pissing them off? What are your obligations when charging by the hour vs. charging by project? If you agree to take on a project, give a rough estimate that it might take 10 days, and charge £X per hour, are you obligated to work for free after those 10 days are up if you have still not managed to complete the project due to unanticipated issues? What if you have delivered the project but bugs are found: should you fix those bugs for free if the 10 days are up, or should you charge your client? Also, for the above project, what should happen if you start on the project but after the 10 days, for whatever reason, you have to give up and tell your client that you cannot do it anymore? I realise that this does nothing to build your reputation and relationship with the client, but are you obligated to pay back the money paid to you, or do you just deliver the half-completed source code and help them find someone else to complete it? I am asking the above questions because I am very new to freelancing and would like to know how to deal with these situations if they ever crop up. Thanks!

    Read the article

  • How do i approach this collision model?

    - by PeeS
    This is the game level prototype I have already implemented. It has a few objects per room, to let me finally add some collision detection/response code to it. VIDEO

    As you can probably see, every object inside has its own AABB; even the room itself has an AABB, so the player is 'inside the room's AABB'. My player will be exactly inside the room, so he has to collide correctly with those AABBs and get a proper collision response when he hits any of the objects inside. Now I would like to hear from you what kind of collision approach I should choose here. How do I approach this kind of thing:

    - AABB-to-AABB collision detection; when positive, go AABB-to-triangle to find the proper plane normal and calculate the response?
    - AABB-to-AABB; when positive, check the AABB's sides to find the proper plane normal and calculate the response?
    - Anything else?

    How do you do this? Many thanks.
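
    For the second option, the usual trick is sketched below (Java for illustration; 3-element min/max arrays stand in for the real AABB type): compute the overlap on each axis, and take the axis of least penetration as the contact normal for the response.

        // Returns the contact normal for A against B, or null when separated.
        static float[] aabbNormal(float[] minA, float[] maxA, float[] minB, float[] maxB) {
            float best = Float.MAX_VALUE;
            int axis = -1;
            float sign = 0;
            for (int i = 0; i < 3; i++) {
                float pushNeg = maxA[i] - minB[i];   // depth if A exits toward -axis
                float pushPos = maxB[i] - minA[i];   // depth if A exits toward +axis
                if (pushNeg < 0 || pushPos < 0) return null; // separating axis found
                float depth = Math.min(pushNeg, pushPos);
                if (depth < best) { best = depth; axis = i; sign = (pushNeg < pushPos) ? -1 : 1; }
            }
            float[] n = new float[3];
            n[axis] = sign;   // face normal of least penetration; 'best' holds the depth
            return n;
        }

    Pushing the player out along n by the stored depth gives the response; the AABB-to-triangle pass is then only needed for non-box geometry.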

    Read the article
