Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.

Page 597 of 1018

  • Question about a simple design problem

    - by Uri
    At work I stumbled upon a method. It made a query and returned a String based on the result, such as the ID of a customer. If the query didn't return a single customer, it returned null; otherwise, it returned a String with the IDs of the owners. It looked like this:

        String error = getOwners();
        if (error != null) {
            throw new Exception("Can't delete, the flat is owned by: " + error);
        }
        ...

    Ignoring the fact that getOwners() returns null when it should instead return an empty String, two things are happening here: the code checks whether the flat is owned by someone, and it returns the owners. I think a more readable logic would be:

        if (isOwned) {
            throw new Exception("Can't delete, the flat is owned by: " + getOwners());
        }
        ...

    The problem is that the first way does with one query what I do with two queries to the database. What would be a good solution involving good design and efficiency for this?
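
    One resolution (a minimal sketch, assuming a data-access method that returns a possibly empty list; the names here are hypothetical) is to keep the single query but make the return type honest: fetch the owners once as a list, and let the emptiness check double as the ownership test:

        import java.util.Collections;
        import java.util.List;

        public class FlatService {
            // One query serves both purposes: "is the flat owned?" and
            // "who owns it?" come from the same result.
            public void deleteFlat(long flatId) throws Exception {
                List<String> owners = findOwnerIds(flatId); // single DB round-trip
                if (!owners.isEmpty()) {
                    throw new Exception("Can't delete, the flat is owned by: "
                            + String.join(", ", owners));
                }
                // ... proceed with the deletion
            }

            // Hypothetical data-access call; returns an empty list when unowned.
            private List<String> findOwnerIds(long flatId) {
                return Collections.emptyList(); // placeholder for the real query
            }
        }

    This keeps the one-query efficiency of the original while making the intent readable, and it removes the null-as-signal problem entirely.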

    Read the article

  • Why is my ethernet interface in promiscuous mode

    - by nhed
    I read that seeing a flag of M in netstat -i is the way to tell which of your interfaces is in promiscuous mode. I run it and I see that eth1 is in promiscuous mode:

        $ netstat -i
        Kernel Interface table
        Iface   MTU Met      RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
        eth1   1500   0 1770161198      0      0      0 57446481      0      0      0 BMRU
        lo    16436   0   97501566      0      0      0 97501566      0      0      0 LRU

    This seems to be the case on all the machines I checked (all CentOS 6.0, both virtual and physical). Any idea why ethernet devices would be in such a mode, unless someone was running a pcap-based app (sudo lsof | grep pcap shows nothing)? I did not see any mention of promiscuous in any of the config files (sudo grep -r promis /etc). Any ideas what puts the interface into that mode, and why?

    P.S. Most of the posts I see on this seem to be security-related; this is not that.

    Read the article

  • How to tell if OpenGL is really working in Ubuntu 10.04

    - by Jonathan
    I have a Lenovo S9e running Intel integrated graphics. Here is my lspci output related to the graphics:

        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
                Subsystem: Lenovo Device 3870
                Flags: bus master, fast devsel, latency 0
                Memory at f0580000 (32-bit, non-prefetchable) [size=512K]
                Capabilities: [d0] Power Management version 2

    I want to know how I can make sure OpenGL support is running in full on an Ubuntu 10.04 installation. I have a few hints to think that it is not:

    - The "Desktop Effects" will not load.
    - Apps such as Stardock, when attempting to use OpenGL rendering, display black boxes instead of transparency.
    - In the game Pioneers, the number-tile icons are suspiciously just black circles.
    - Windows games running under Wine support only software rendering, not hardware rendering.

    When I boot into a Knoppix LiveCD, the desktop effects work splendidly, meaning Compiz detects my computer as capable. My problem with troubleshooting is that, as far as I can tell, Canonical has basically eliminated the conf-file-based configuration mechanism of X11, making it even harder to ensure the graphics modules are loading properly. How do I debug and test OpenGL on my Ubuntu 10.04 installation?

    Read the article

  • What's the best way to manage error logging for exceptions?

    - by Peter Boughton
    Introduction: If an error occurs on a website or system, it is of course useful to log it and show the user a polite message with a reference code for the error. And if you have lots of systems, you don't want this information dotted around - it is good to have a single centralised place for it.

    At the simplest level, all that's needed is an incrementing id and a serialized dump of the error details (with the "centralised place" possibly being an email inbox). At the other end of the spectrum is perhaps a fully normalised database that also lets you press a button and see a graph of errors per day, identify the most common type of error on system X, or check whether server A has more database connection errors than server B, and so on. What I'm referring to here is logging code-level errors/exceptions by a remote system - not "human-based" issue tracking, such as is done with Jira, Trac, etc.

    Questions: I'm looking for thoughts from developers who have used this type of system, specifically with regards to:

    - What are essential features you couldn't do without?
    - What are good-to-have features that really save you time?
    - What features might seem a good idea, but aren't actually that useful?

    For example, I'd say a "show duplicates" function that identifies multiple occurrences of an error (without worrying about 'unimportant' details that might differ) is pretty essential. A button to "create an issue in [Jira/etc] for this error" sounds like a good time-saver. Just to reiterate, what I'm after is practical experience from people who have used such systems, preferably backed up with reasons why a feature is awesome/terrible. (If you're going to theorise anyway, at the very least mark your answer as such.)
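
    On the "show duplicates" point, here is a minimal sketch of one way to fingerprint errors so recurrences group together (an illustration only, not a standard; choosing which details to ignore is the hard part). The idea is to hash the exception class plus the top few stack frames while deliberately ignoring the message text, which tends to carry variable details like IDs and timestamps:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Arrays;

        public class ErrorFingerprint {
            // Builds a stable key for "the same" error: exception class plus
            // the top five stack frames, ignoring the message text and the
            // deeper parts of the trace.
            public static String of(Throwable t) throws Exception {
                StringBuilder sb = new StringBuilder(t.getClass().getName());
                Arrays.stream(t.getStackTrace()).limit(5)
                      .forEach(f -> sb.append('|').append(f.getClassName())
                                      .append('#').append(f.getMethodName()));
                byte[] hash = MessageDigest.getInstance("SHA-1")
                        .digest(sb.toString().getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : hash) hex.append(String.format("%02x", b));
                return hex.toString();
            }
        }

    Two NullPointerExceptions thrown from the same method then share a fingerprint even when their messages differ, which is exactly what a duplicate counter needs.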

    Read the article

  • Configuring NVIDIA Quadro with Dell Precision M4600

    - by vsecades
    After a frustrating couple of weeks with my recently bought Dell Precision laptop, I managed to fix an issue where Ubuntu (yes, NOT Windows, get serious) would not recognize the video card, causing all sorts of problems all over the place. One Saturday morning, nearly ready to throw the thing away, I managed to find a post about NVIDIA Optimus technology (http://www.pcmag.com/article2/0,2817,2358963,00.asp). Now, I am a huge advocate of disruptive new stuff, as long as we keep the broader audience in mind. Anyhow, disabling Optimus (which, as the BIOS settings state, only works on Windows 7 or later) effectively allows the NVIDIA-based Ubuntu driver to kick in at full force. No need for a trash can anymore, thankfully. As I saw multiple posts all over the place about this: check your BIOS, disable the setting, and try the video again to see if this corrects your issues. Best of luck!

    Read the article

  • Strategy for hosting 700+ domains names, each with a static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to do the following? (There is a rough sketch of the hosting-side idea after this list.)

    1. Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...

    2. It's easy to manage CNAMEs on the registrar side, but is there a way to quickly associate CNAMEs with new websites on the hosting side, maybe using the method offered by gh-pages (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing.

    I'm also a front-end developer specializing in rapid web development and front-end build systems, so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
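
    To make the hosting-side association concrete, here is a minimal sketch of name-based hosting where "deploying" a new site is nothing more than creating a folder named after the domain (paths and port are hypothetical; in practice this role is usually played by a wildcard virtual host in nginx or Apache, but the mapping is the same). It uses the JDK's built-in HTTP server:

        import com.sun.net.httpserver.HttpServer;
        import java.net.InetSocketAddress;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class FolderPerDomain {
            public static void main(String[] args) throws Exception {
                Path docRoot = Paths.get("/var/www"); // one subfolder per domain
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/", exchange -> {
                    // "www.example.com:8080" -> /var/www/example.com/index.html
                    String host = exchange.getRequestHeaders().getFirst("Host")
                            .split(":")[0].replaceFirst("^www\\.", "");
                    Path page = docRoot.resolve(host).resolve("index.html");
                    byte[] body = Files.exists(page) ? Files.readAllBytes(page)
                                                     : "Not found".getBytes();
                    exchange.sendResponseHeaders(Files.exists(page) ? 200 : 404, body.length);
                    exchange.getResponseBody().write(body);
                    exchange.close();
                });
                server.start(); // point every domain's A/CNAME record at this box
            }
        }

    With that shape, all 700 domains can point at one IP, and adding site number 701 is a folder copy plus a DNS record - no per-site admin-panel step.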

    Read the article

  • One codebase - lots of hosted services (similar to a basecamp style service) - planning structure

    - by RickM
    We have built a service (PHP-based) for a client, and are now looking to offer it to other clients as a hosted service. For this example, think of it like a hosted forum service, where a client signs up on our site and is given a subdomain (or can use their own domain); the code picks up the domain, checks it against a 'master' users table, and then loads the content as needed. I'm trying to work out the best way of handling multiple clients. At the moment I can only think of two options that would work:

    Option 1 - Have one set of database tables, but add a column called 'siteid' to each table. This means every query has to check the siteid. It would effectively work with just one codebase and one database.

    Option 2 - Have one 'master' database with all the core stuff, such as the client details and their domain. When the system checks the domain, it pulls the client's database details (username/password/dbname) from a table and loads a second database. The issue here is the security of the MySQL server details; however, it has the benefit that each client runs their own database instead of sharing one.

    Which option would I be better taking here, and why? Ideally I want it to be fairly easy to convert the 'standalone' script to the 'multi-domain' script, as we're on a tight deadline.
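
    For reference, the per-request piece of Option 2 is small; here is a minimal sketch (the question is PHP, but the flow is language-agnostic - shown in Java/JDBC here, with a hypothetical clients table):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class TenantResolver {
            // Option 2: the 'master' database maps the incoming domain to that
            // client's own connection details, so per-client credentials live
            // in one place and never in the application code.
            public static Connection connectForDomain(Connection master, String domain)
                    throws SQLException {
                PreparedStatement ps = master.prepareStatement(
                    "SELECT db_url, db_user, db_pass FROM clients WHERE domain = ?");
                ps.setString(1, domain);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) throw new SQLException("Unknown tenant: " + domain);
                    return DriverManager.getConnection(rs.getString("db_url"),
                            rs.getString("db_user"), rs.getString("db_pass"));
                }
            }
        }

    Everything after this call is unchanged single-tenant code, which is also why Option 2 tends to make the standalone-to-multi-domain conversion the smaller job: Option 1's 'siteid' filter has to be threaded through every query.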

    Read the article

  • A Quarter Century of SPARC

    - by kemer
    You might have missed an interesting milestone: the 25th anniversary of SPARC. Twenty-five years! Almost 40% of my life: humbling, maybe a little scary. When I joined Sun Microsystems in 1988, SPARC was just starting to shake things up. The next year we introduced the SPARCstation 1, which had basically triple the performance of our Motorola-based Sun-3 systems. Not too long after that, our competition began a campaign of “SPARC is dead.” We really distressed them with our success, in spite of our small size. “It won’t last.” “It can’t last!” So they told themselves. For a stroll down memory lane, take a look at this page. I remember the sales meeting we had in Atlanta to internally announce the SPARCstation 1. Sun hadn’t really hit the big time yet. Our much bigger competitors viewed us as an ill-mannered pest, certain of our demise. And why wouldn’t they be certain: other startups more our size, such as Apollo (remember them?), Silicon Graphics (they fought the good fight!), and the incredibly cool Symbolics are memories. Wait! There was also a BIG company, DEC, who scoffed at us: they are history, too. In fact, we really upset them with what was supposed to be an internal-only video production that was a take-off on Bruce Lee movies, in which we battled the evil Doctor DEC – complete with computer mice (or is that “mouses”?) wielded like nunchucks, with the new SPARCstation 1 somehow in the middle of everything. The memory is vivid, but the details hazy. After all, that was almost a quarter century ago. So, here’s to Oracle’s SPARC: still going strong after all these years. – Kemer

    Read the article

  • Windows XP + PAE + 6GB RAM: See more than 3.5GB?

    - by nonot1
    First, let me say I've seen a number of similar questions on SuperUser, and I don't think this is a duplicate (most address 4GB of installed RAM; I have 6GB). I have Windows XP 32-bit running on an i7-based Xeon system with 6GB of RAM, but I only see 3.5GB of RAM in Windows. Is there any way to squeeze more visible RAM out of this setup? Even an extra 1GB would be great. Does having 6GB (vs 4GB) of RAM installed help at all? (I.e., even if I lose the 3.5-4.0GB region, can I use the area above it?) P.S. I will eventually move to Windows 7 64-bit, but can't for now.

    Read the article

  • Automatic Generalization

    - by Nick Harrison
    I have been interested in functional programming since college. I played around a little with LISP back then, but have not had an opportunity since. Now that F# ships standard with VS 2010, I figured now is my chance. So I was reading up on it a little over the weekend when I came across a very interesting topic: F# includes a concept called "Automatic Generalization". As I understand it, the compiler will look at your method and analyze how you are using parameters. It will automatically switch to a generic parameter if that is possible based on your usage. Wow! I am looking forward to playing with this. I have long been an advocate of using the most generic types possible, especially when developing library classes: use the highest-level base class you can get away with; use an interface instead of a specific implementation. I don't advocate passing object around, but you get the idea. Tools like ReSharper, FxCop, and most static code analysis tools provide guidance to help you identify when a more generalized type is possible, but this is the first time I have heard about the compiler taking matters into its own hands. I like the sound of this. We'll see if it is a good idea or not. What are your thoughts? Am I missing the mark on what Automatic Generalization does in F#? How would this work in C#? Do you see any problems with this?
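
    For a rough feel of what the feature saves you (an illustration only - F# infers this; in C# or Java you write the type parameter by hand, shown here in Java):

        import java.util.List;

        public class Generalized {
            // F#'s automatic generalization would infer a signature like this
            // from usage alone; here the generic parameter T is explicit.
            public static <T> T firstOrDefault(List<T> items, T fallback) {
                return items.isEmpty() ? fallback : items.get(0);
            }

            public static void main(String[] args) {
                System.out.println(firstOrDefault(List.of("a", "b"), "none")); // a
                System.out.println(firstOrDefault(List.<Integer>of(), 42));    // 42
            }
        }

    The mechanics would be the same in C#: nothing happens automatically, so the compiler taking this step for you is exactly what makes the F# feature notable.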

    Read the article

  • MSDN Live 2010 &ndash; Delivered : 24 sessions (4 x 6) on Visual Studio and Team Foundation Server

    - by terje
    We (Mikael Nitell and me) got a whole track on the Norwegian MSDN Live tour this year. We did these as a pair, covering 4 cities over 4 days, 6 sessions per day, taking 8 hours to get through it. The Icelandic volcano made the travels a bit rough, but we managed 6 flights out of 8. The first leg had to go by van instead - a 7-8 hour drive each way, together with other MSDN Live presenters. A memorable tour! Oslo was the absolute high point: we had to change to a bigger hall, people were crowding in, and even the big hall was packed! The presentations were mostly based on demos, but we had a few slides as well; they have been uploaded to my SkyDrive. (Info to aliens - some of the text may be Norwegian.) The sessions were as follows:

    - Overview of news in Visual Studio and Team Foundation Server 2010
    - Ensuring Quality with VS/TFS 2010
    - Releasing products with VS/TFS 2010
    - No More No Repro with VS/TFS 2010
    - Performance Testing and Parallel Programming with VS/TFS 2010
    - Migrating to VS/TFS 2010
    - Tips, tricks, news and some best practices with VS/TFS 2010

    In the coming days, I will post examples from the demos too, with explanations of how they are intended to work. These entries will also contain stuff we had to remove from the actual presentations due to the time constraints. We managed to create recordings of two of the sessions, which will be uploaded to Channel 9 by Microsoft, afaik. I will update this blog with information about exact locations when that is done. Also note we're (read: Osiris Data AS) running both Upgrade and Deep Dive courses on VS/TFS 2010 now in May. Please look here for more info. If you want to be informed, follow me on Twitter. All blog entries will be announced on Twitter.

    Read the article

  • Single hardware unit to protect web servers and implement smart publishing

    - by Maxim V. Pavlov
    Thus far we've been using Forefront TMG 2010 as a combined edge firewall + intrusion prevention system + web site publishing mechanism in the data center, in front of a few web server machines. Since we develop on ASP.NET, we use IIS and are, in general, a Microsoft crowd. Since TMG is being deprecated, we need to come up with a hardware alternative to protect and serve our data center web cloud. Could you please advise a hardware or virtual appliance solution that can provide routing, flood prevention, and smart web-site publishing (one IP, many web sites, based on a domain name filter), all in one? Even if it is hard to configure, as long as it covers all these features, we will invest the time to learn it and replace TMG eventually.

    Read the article

  • Genetic Algorithm new generation exponentially increasing

    - by Rdz
    I'm programming a Genetic Algorithm in C++, and after researching all kinds of ways of doing the GA's operators (selection, crossover, mutation) I came up with a doubt. Let's say I have an initial population of 500. My selection consists of getting the top 20% of those 500 (based on best fitness), so I get 100 individuals to mate. When I do the crossover I get 2 children, where both together have a 50% chance of surviving. So far so good. I start the mutation, and everything's OK. Now, when I start choosing the next generation, I see that I have a big number of children (4950, if you want to know). The thing is, every time I run the GA, if I send all the children to the next generation, the number of individuals per generation will increase exponentially. So there must be a way of choosing the children to fill a new generation without going beyond the size of the initial population. What I'm asking here is whether there is a way of choosing the children to fill the new generations, OR whether I should somehow choose (and maybe reduce) the parents that mate, so I don't get so many children in the end. Thanks :)
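
    A common way out (a sketch of one scheme, not the only one - the question is C++, but the shape is the same in any language; Java is used here for illustration) is to make the next generation the same size as the current one by construction: rank the population, take the top 20% as the mating pool, then breed exactly popSize children and stop:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;
        import java.util.Random;

        public class FixedSizeGeneration {
            // Breeds exactly pop.size() children, so the population size stays
            // constant from generation to generation instead of exploding.
            static List<double[]> nextGeneration(List<double[]> pop,
                    Comparator<double[]> bestFirst, Random rnd) {
                int popSize = pop.size();
                List<double[]> ranked = new ArrayList<>(pop);
                ranked.sort(bestFirst);
                List<double[]> parents = ranked.subList(0, Math.max(2, popSize / 5));
                List<double[]> next = new ArrayList<>(popSize);
                while (next.size() < popSize) {
                    double[] a = parents.get(rnd.nextInt(parents.size()));
                    double[] b = parents.get(rnd.nextInt(parents.size()));
                    next.add(crossoverAndMutate(a, b, rnd));
                }
                return next;
            }

            // Uniform crossover plus a little Gaussian mutation (placeholder).
            static double[] crossoverAndMutate(double[] a, double[] b, Random rnd) {
                double[] child = new double[a.length];
                for (int i = 0; i < a.length; i++)
                    child[i] = (rnd.nextBoolean() ? a : b)[i] + rnd.nextGaussian() * 0.01;
                return child;
            }
        }

    A frequent refinement is elitism: copy the best one or two parents into the next generation unchanged before the breeding loop, so the best solution found so far can never be lost.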

    Read the article

  • Tips on how to notify a user of new features in your game

    - by brent777
    I have noticed a problem when releasing new features for a game that I wrote for Android and published on the Google Play Store. Because my game is "stage-based" - not a game like Hay Day, for example, where users go into the game every day because it can't really be finished - my users are not aware of new features that I release. For example, if I publish a new version of my game containing a couple of new stages, most devices will just auto-update the game; users don't even notice, and don't think to check what's new. This is why an approach like popping open a dialog that showcases the new features the first time they open the game after the update is not really sufficient. I am looking for tips on an approach that will draw my users back into the game, where they could then read more detail about the new features in such a dialog. I was thinking of something like a notification telling them to check out the new features after an update is done, but I am not sure if this is a good idea. Any suggestions to help me solve this problem would be awesome.
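
    For the notification route, here is a minimal sketch of one way to do it on Android (the strings and icon are placeholders, and on newer Android versions you would also need a notification channel and an explicit PendingIntent mutability flag). The system broadcasts MY_PACKAGE_REPLACED to an app right after it has been updated, so a receiver can post a "what's new" notification without the game ever being opened:

        import android.app.Notification;
        import android.app.NotificationManager;
        import android.app.PendingIntent;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;

        // Register in AndroidManifest.xml with an intent filter for
        // android.intent.action.MY_PACKAGE_REPLACED; it fires only for
        // updates of this app, right after the Play Store installs them.
        public class UpdateReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context ctx, Intent intent) {
                Intent launch = ctx.getPackageManager()
                        .getLaunchIntentForPackage(ctx.getPackageName());
                Notification n = new Notification.Builder(ctx)
                        .setSmallIcon(android.R.drawable.star_on) // placeholder icon
                        .setContentTitle("New stages added!")
                        .setContentText("Tap to see what's new.")
                        .setContentIntent(PendingIntent.getActivity(ctx, 0, launch, 0))
                        .setAutoCancel(true)
                        .build();
                ((NotificationManager) ctx.getSystemService(Context.NOTIFICATION_SERVICE))
                        .notify(1, n);
            }
        }

    Tapping the notification launches the game, which is a natural moment to show the more detailed "what's new" dialog mentioned above.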

    Read the article

  • Which Ubuntu linux kernel tree matches my installed kernel?

    - by Rmano
    Answering a recent question - and, before that, trying to see whether a patch which is fundamental for my machine had been included in a kernel release - I ran into the following problem: how can I match the kernel version I'm running, which is

        $ uname -a
        Linux samsung-romano 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    with the exact kernel source, which I suppose should be stored in http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=summary? That page lists quite a lot of tags, but none of them corresponds to 3.13.0-29, which is my running kernel right now. The mapping should be in https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable, where it is said that the 3.13 Ubuntu kernel is based on 3.13.11 - I think. But going from there to the tree I have installed is not straightforward. Note: I know I can install the kernel sources corresponding to my installed kernel. But I do not want to install them; I would like a pointer to the git tree so I can browse it online (and check for commits, patches, etc.). The best options seem to be linux3.13-y.review or linux3.13-y.queue, but I am unable to find where these trees are tagged for the release. If I understand the policy correctly, patches are accumulated in -review for testing and in -queue for the next minor release/update, but I am unable to find the exact release tree - I mean, a tag equivalent to "3.13.0-29 was cut here".

    Read the article

  • What would happen if I did a "Boot to VHD" to a VHD that was configured to run under Hyper-V?

    - by tbone
    Microsoft has a Hyper-V based VM I'm interested in running. However, I don't have access to a Windows Server 2008 machine to try it on - only a Windows 7 Pro x64 machine, and Windows 7 does not support Hyper-V. This is the VM in question, the "2010 Information Worker Demonstration and Evaluation Virtual Machine (SP1)" (http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=27417): "This download contains three Windows Server 2008 R2 SP1 Hyper-V Virtual Machines set for evaluating and demonstrating Office 2010, SharePoint 2010 and Project Server 2010." I came across a somewhat relevant article from Scott Hanselman, "Less Virtual, More Machine - Windows 7 and the magic of Boot to VHD" (http://www.hanselman.com/blog/LessVirtualMoreMachineWindows7AndTheMagicOfBootToVHD.aspx). I realize other options are to convert this VM to a VMware-compatible VM, or one of the approaches for running it under VirtualBox. But instead of those routes, I'm wondering: what would happen if I tried to go the "Boot to VHD" route using this Hyper-V VHD? Is it possible that during the boot process Windows would simply notice that the hardware had changed, adjust accordingly by installing the appropriate drivers, and continue on without a hitch?

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it, and I use it in conjunction with SRP to create classes that do only one thing. I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class:

        class PaycheckCalculator {
            // ...
            protected decimal GetOvertimeFactor() { return 2.0M; }
        }

    Now say, for example, that the overtime factor changes to 1.5. Since the above class was designed to be extended, I can easily subclass it and return a different overtime factor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question rather than subclassing, overriding the method, and re-wiring my objects in my IoC container. As a result, I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy, because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently?

    Update: based on the answers, it looks like this contrived example is a poor one for a number of different reasons. Its main intent was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
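
    One way to make the extension point cheaper to use than to bypass (a sketch of the composition route rather than subclassing; the original is C#, shown here in Java) is to push the varying value behind an injected policy, so a new factor is new wiring rather than a modification or a subclass:

        public class PaycheckCalculator {
            // The varying policy is injected: changing the overtime factor is
            // configuration, not modification of this class and not a subclass.
            public interface OvertimePolicy { double factor(); }

            private final OvertimePolicy overtime;

            public PaycheckCalculator(OvertimePolicy overtime) {
                this.overtime = overtime;
            }

            public double overtimePay(double hourlyRate, double overtimeHours) {
                return hourlyRate * overtime.factor() * overtimeHours;
            }

            public static void main(String[] args) {
                PaycheckCalculator before = new PaycheckCalculator(() -> 2.0);
                PaycheckCalculator after = new PaycheckCalculator(() -> 1.5);
                System.out.println(before.overtimePay(20, 10)); // 400.0
                System.out.println(after.overtimePay(20, 10));  // 300.0
            }
        }

    When the knob is a constructor argument, the IoC re-wiring the question dreads becomes a one-line change, which is often what tips the balance away from just editing the method.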

    Read the article

  • Improving the efficiency of my bloom/glow shader

    - by user1157885
    I'm making a neon-style game where everything is glowing, but the glow I have is kind of small, and I want to know if there's an efficient way to increase its size other than increasing the pixel sample iterations. Right now I have something like this:

        float4 glowColor = tex2D(glowSampler, uvPixel);

        // Makes the initial lines brighter/closer to white
        if (glowColor.r != 0 || glowColor.g != 0 || glowColor.b != 0)
        {
            glowColor += 0.5;
        }

        // Loops over the weights and offsets and samples from the pixels
        // based on those numbers
        for (int i = 0; i < 20; i++)
        {
            glowColor += tex2D(glowSampler, uvPixel + glowOffsets[i] + 0.0018) * glowWeights[i];
        }

        finalColor += glowColor;

    For the offsets, it moves up, down, left and right (5 times each, so it loops 20 times), and the weights just lower the glow amount the further away it gets. The method I was using before to increase the glow was to raise the number of iterations from 20 to 40 and enlarge the offset/weight arrays, but my computer started to have FPS drops, so I was wondering: how can I make the glow bigger/more vibrant without making it so CPU/graphics-card intensive?

    Read the article

  • Authenticate native mobile app using a REST API

    - by Supercell
    I'm starting a new project soon targeting mobile applications on all major platforms (iOS, Android, Windows). It will be a client-server architecture; the app is both informational and transactional. For the transactional part, users are required to have an account and log in before a transaction can be made. I'm new to mobile development, so I don't know how the authentication part is done on these platforms. The clients will communicate with the server through a REST API (using HTTPS, of course). I haven't yet decided whether I want the user to log in when they open the app, or only when they perform a transaction. I have the following questions:

    1. In the Facebook application, you only enter your credentials when you open the application for the first time; after that, you're automatically signed in every time you open the app. How does one accomplish this? Simply by encrypting and storing the credentials on the device and sending them with every request?

    2. Do I need to authenticate the user for each (transactional) request made to the REST API, or should I use a token-based approach?

    Please feel free to suggest other ways of doing authentication. Thanks!
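
    On the token-based approach in question 2, here is a minimal sketch of the usual shape (an illustration only; in practice you would reach for an established scheme such as OAuth 2.0 or signed JWTs rather than rolling your own): the server issues a signed, expiring token once at login, the client stores that instead of the password and sends it with every request, and the server verifies it without a database lookup:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class AuthToken {
            // Server-side secret; in practice loaded from configuration,
            // never hard-coded.
            private static final byte[] KEY = "change-me".getBytes(StandardCharsets.UTF_8);

            // Issued once after a successful username/password login.
            // Token layout: userId:expiresAtMillis:signature
            // (assumes user IDs contain no ':').
            public static String issue(String userId, long expiresAtMillis) throws Exception {
                String payload = userId + ":" + expiresAtMillis;
                return payload + ":" + sign(payload);
            }

            // Called on every API request; no DB hit needed to authenticate.
            public static boolean verify(String token) throws Exception {
                int i = token.lastIndexOf(':');
                String payload = token.substring(0, i);
                long expiry = Long.parseLong(payload.substring(payload.indexOf(':') + 1));
                return sign(payload).equals(token.substring(i + 1))
                        && System.currentTimeMillis() < expiry;
            }

            private static String sign(String payload) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
                return Base64.getUrlEncoder().withoutPadding()
                        .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
            }
        }

    That also suggests an answer to question 1 for your own backend: store the (revocable, expiring) token on the device rather than the raw credentials, and the Facebook-style "stay signed in" behavior falls out of re-sending it on startup.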

    Read the article

  • How to take a copy of wubi ubuntu system as it is?

    - by rajat
    I have installed Ubuntu (as a dual boot) using the Wubi (Windows-based Ubuntu Installer) installer for Windows, and have been working in Linux since then. Now that I have many projects with many dependencies, I'd like to install the same Ubuntu to other machines, so that I don't need to install Ubuntu first and then each and every project and its dependencies. There is a folder called ubuntu in my Windows drive, which was created by Wubi and which contains all the Ubuntu stuff. The other machines have only Windows 7 installed and have the same configuration. Is there any way to install the same Ubuntu I am using to the other machines?

    Read the article

  • Brain picking during job interview

    - by mark
    Recently, I had a job interview at a big Silicon Valley company for a senior software developer/R&D position. I had several technical phone screens, an all-day on-site interview, and more technical phone screens for another position later. The interviews went really well, and I have a PhD and working experience in the area I was applying for, yet no offer was made. So far, so good: it was an interesting experience, I am employed, and there are absolutely no hard feelings about this. However, some of the interviewers asked really detailed questions, to the point of being suspicious, about new technologies I have been working on. These technologies are still in development and have not come to market yet, and I know some major hardware/software companies are working on them too. I have had many interviews before, and based on my former interviewing experience and the impression some of the interviewers left behind, I now believe all this company wanted from me was to extract some ideas about what I did in this field. Remember, I am referring to an R&D position, not the standard software developer stuff. Has anybody encountered this situation, and how did you deal with it? I am not so much concerned about "stealing" ideas as about being tricked into showing up for an interview when there is no intention to hire anyway. I am considering refusing technical interviews in the future and instead proposing a trial period, during which the company can easily reconsider its hiring decision.

    Read the article

  • Pixelated PDF in Apple Preview slideshow mode, but not in regular window

    - by Zack
    I have a PDF which is a presentation exported from OpenOffice. Two of the slides in this presentation have embedded .eps graphs. When I run the presentation using Preview's slideshow mode, the graphs are severely aliased and the axes are illegible. But when I just view the PDF in regular windowed mode, the graphs are properly antialiased and legible. Is there any way to get Preview to do the same display that it does in windowed mode, but in fullscreen (no window title, no menu bar)? (I don't want to just run the presentation from OpenOffice, because OpenOffice shows the same horrible aliasing effects plus it takes about 30 seconds to show the slide. I don't have, and don't want, Acrobat or MS Office. However, please do feel free to suggest other programs for doing PDF-based slideshows.)

    Read the article

  • Open Source Visualization and Dashboard Software

    - by helios
    I am working on an open source Application Performance Monitoring (APM) tool and am looking for a visualization tool with dashboard capabilities. I came across Graphite, which looks pretty good, but I'm wondering if there is anything better out there before I settle on that tool. Here's the list of features I am interested in:

    Must-have:
    - Open source license
    - API to submit real-time data
    - Web-based visualization interface
    - Persistence - file or database

    Nice-to-have:
    - Dashboard capabilities: allow users to select a few metrics (CPU, heap usage, # of active users, etc.) and place them on a single page for easier monitoring.

    Any suggestions?

    Read the article

  • What scenarios are implementations of Object Management Group (OMG) Data Distribution Service best suited for?

    - by mindcrime
    I've always been a big fan of asynchronous messaging and pub/sub implementations, but coming from a Java background, I'm most familiar with JMS-based messaging systems such as JBoss MQ, HornetQ, ActiveMQ, and OpenMQ. I've also loosely followed the discussion of AMQP. But I recently became aware of the Data Distribution Service specification from the Object Management Group, and found there are a couple of open-source implementations: OpenSplice and OpenDDS. It sounds like this stuff is focused on the kind of high-volume scenarios one tends to associate with financial trading exchanges and what-not. My current interest is more along the lines of notifications related to activity stream processing (think Twitter/Facebook), and I am wondering if the DDS servers are worth looking into further. Could anyone who has practical experience with this technology, and/or a deep understanding of it, comment on how useful it is and what scenarios it is best suited for? How does it stack up against more "traditional" JMS servers, and/or AMQP (or even STOMP or OpenWire, etc.)? Edit: FWIW, I found some information in this StackOverflow thread. Not a complete answer, but anybody else finding this question might also find that thread useful, hence the added link.

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define the path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically, the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get by this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }

            public float Alpha; // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward; // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a coordinate and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, though I'm stuck on the logic. My first problem is: how can I determine which two points the object is between? Given that the points can be placed at different intervals along the spline, two points in front of or behind the object could be closer to it. The other problem is figuring out the proportion between the two points it lies between. I.e., if there is a point a at coordinate (0,0,0) and a point b at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be 0.5, as it is an equal distance from point a and point b. That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
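
    Both questions reduce to projecting the object's position onto each segment between consecutive cached points (a sketch follows - the original is Unity-style C# with Vector3, but the math is identical; plain arrays are used here in Java). The clamped projection gives the proportion t directly, and the segment with the smallest distance to the projected point is the one the object is "between". Note that t ignores any sideways offset, so (0.5,3,0) against the segment (0,0,0)->(1,0,0) still yields t = 0.5:

        public class SplineNearest {
            // Projects p onto the segment a->b and returns the proportion
            // t in [0,1]: 0 at a, 1 at b, 0.5 exactly halfway - regardless
            // of how far p sits off the segment sideways.
            static double segmentT(double[] a, double[] b, double[] p) {
                double dot = 0, lenSq = 0;
                for (int k = 0; k < 3; k++) {
                    double ab = b[k] - a[k];
                    dot += (p[k] - a[k]) * ab;
                    lenSq += ab * ab;
                }
                return Math.max(0, Math.min(1, dot / lenSq));
            }

            // Which pair of cached points is the object between? Test each
            // consecutive pair and keep the segment closest to p. (In practice
            // you would only test segments near the previous frame's alpha.)
            static int nearestSegment(double[][] cached, double[] p) {
                int best = 0;
                double bestDistSq = Double.MAX_VALUE;
                for (int i = 0; i + 1 < cached.length; i++) {
                    double t = segmentT(cached[i], cached[i + 1], p);
                    double distSq = 0;
                    for (int k = 0; k < 3; k++) {
                        double c = cached[i][k] + t * (cached[i + 1][k] - cached[i][k]);
                        distSq += (p[k] - c) * (p[k] - c);
                    }
                    if (distSq < bestDistSq) { bestDistSq = distSq; best = i; }
                }
                return best;
            }
        }

    With the segment index i and the proportion t in hand, the estimated Position and Forward are just lerps between cached[i] and cached[i+1] (normalize the lerped forward vector before using it as a look direction).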

    Read the article
