Search Results

Search found 31774 results on 1271 pages for 'chris go'.


  • JavaOne Latin America Sessions

    - by Tori Wieldt
    The stars of Java are gathering in São Paulo next week. Here are just a few of the outstanding sessions you can attend at JavaOne Latin America:
    - “Designing Java EE Applications in the Age of CDI” - Michel Graciano, Michael Santos
    - “Don’t Get Hacked! Tips and Tricks for Securing Your Java EE Web Application” - Fabiane Nardon, Fernando Babadopulos
    - “Java and Security Programming” - Juan Carlos Herrera
    - “Java Craftsmanship: Lessons Learned on How to Produce Truly Beautiful Java Code” - Edson Yanaga
    - “Internet of Things with Real Things: Java + Things – API + Raspberry PI + Toys!” - Vinicius Senger
    - “OAuth 101: How to Protect Your Resources in a Web-Connected Environment” - Mauricio Leal
    - “Approaching Pure REST in Java: HATEOAS and HTTP Tuning” - Eder Ignatowicz
    - “Open Data in Politics: Using Java to Follow Your Candidate” - Bruno Gualda, Thiago Galbiatti Vespa
    - "Java EE 7 Platform: More Productivity and Integrated HTML" - Arun Gupta
    Go to the JavaOne site for a complete list of sessions. JavaOne Latin America will be in São Paulo, 4-6 December 2012 at the Transamerica Expo Center. Register by 3 December and save R$ 300,00! For more information or to register, call (11) 2875-4163.

    Read the article

  • Missing driver ASUS PCE-N53 11n N600 PCI-E Adapter

    - by oyse
    I have problems getting an Asus PCE-N53 11n N600 PCI-E adapter card to work on my desktop computer. As far as I can tell, no drivers are installed for the card. I know I can manually download the drivers directly from Asus, but I would rather not go that route. If anyone knows of any packages, or anything else I can do to make this work, it would be much appreciated. Some system details:

    ```
    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 12.04.1 LTS
    Release:        12.04
    Codename:       precise

    $ sudo lshw -C network
      *-network
           description: Ethernet interface
           product: RTL8111/8168B PCI Express Gigabit Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:03:00.0
           logical name: eth0
           version: 06
           serial: d4:3d:7e:03:b9:1d
           size: 100Mbit/s
           capacity: 1Gbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=192.168.0.173 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
           resources: irq:43 ioport:d000(size=256) memory:f2104000-f2104fff memory:f2100000-f2103fff
      *-network UNCLAIMED
           description: Network controller
           product: Ralink corp.
           vendor: Ralink corp.
           physical id: 0
           bus info: pci@0000:04:00.0
           version: 00
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list
           configuration: latency=0
           resources: memory:f7100000-f710ffff

    $ lsmod
    Module                 Size  Used by
    nvidia             12319264  51
    vesafb                13844  1
    snd_hda_codec_hdmi    32474  1
    joydev                17693  0
    bnep                  18281  2
    rfcomm                47604  0
    bluetooth            180104  10 bnep,rfcomm
    snd_hda_codec_realtek 224173 1
    snd_seq_midi          13324  0
    ppdev                 17113  0
    snd_rawmidi           30748  1 snd_seq_midi
    usbhid                47199  0
    hid                   99559  1 usbhid
    nouveau              774641  0
    parport_pc            32866  1
    snd_hda_intel         33773  5
    ttm                   76949  1 nouveau
    snd_hda_codec        127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
    drm_kms_helper        46978  1 nouveau
    drm                  242038  3 nouveau,ttm,drm_kms_helper
    snd_seq_midi_event    14899  1 snd_seq_midi
    snd_hwdep             13668  1 snd_hda_codec
    snd_seq               61896  2 snd_seq_midi,snd_seq_midi_event
    i2c_algo_bit          13423  1 nouveau
    mxm_wmi               12979  1 nouveau
    wmi                   19256  1 mxm_wmi
    mac_hid               13253  0
    snd_pcm               97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
    psmouse               97362  0
    video                 19596  1 nouveau
    snd_timer             29990  2 snd_seq,snd_pcm
    snd_seq_device        14540  3 snd_seq_midi,snd_rawmidi,snd_seq
    snd                   78855  20 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_rawmidi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_seq,snd_pcm,snd_timer,snd_seq_device
    serio_raw             13211  0
    soundcore             15091  1 snd
    snd_page_alloc        18529  2 snd_hda_intel,snd_pcm
    mei                   41616  0
    lp                    17799  0
    parport               46562  3 ppdev,parport_pc,lp
    r8169                 62099  0
    ```

    Read the article

  • I'll be Speaking at ILTA's SharePoint for Legal Symposium on June 16th, 2010

    I'll be speaking at the International Legal Technology Association's SharePoint for Legal Symposium on June 16th, 2010 at Microsoft's offices in Downers Grove, IL. My talk will be about Building Public-Facing Websites with SharePoint 2010. SharePoint has quickly become a popular platform for companies to build their public-facing websites on. I'll go over the new features in SharePoint 2010 specific to web content management, and also discuss some best practices and lessons learned from our experience building internet sites with SharePoint. The SharePoint for Legal Symposium is a two-day event with talks covering a variety of other topics, such as:
    - Enterprise Search Using SharePoint 2010 and FAST
    - SharePoint as a Document Management System
    - Content Classification in SharePoint 2010: Taxonomies, Folksonomies and More
    I'm very interested in hearing from firms who have been testing SharePoint 2010 prior to RTM, particularly how they are taking advantage of the new features in SharePoint 2010, e.g. Managed Metadata. I've made my presentation available in advance; check it out on SlideShare: ILTA Presentation - Building Public-Facing Websites with SharePoint 2010.

    Read the article

  • Suggestions required to build an ECommerce Platform

    - by Haris
    For a prospective client we have to offer a solution that provides the following systems: CMS, order management, shopping cart, CRM, helpdesk, accounting & finance, and custom functions. To save time and avoid reinventing the wheel, our idea is to integrate different off-the-shelf solutions. Their first requirement is that the system has to be hosted in their country, which I think excludes applications like Aplicor, NetSuite and Salesforce. Basically the nucleus would be the CMS, which would integrate all the other apps. PHP- or .NET-based solutions would be our preference, as we have in-house expertise. So far I have come up with the following combinations:
    - Joomla (CMS) + VirtueMart (cart + ordering) + SugarCRM + Open ERP (finance) + OTRS
    - Magento (CMS + cart + ordering) + SugarCRM + Open ERP (finance) + Helpdesk Ultimate
    - Drupal (CMS) + Ubercart (cart + ordering) + SugarCRM + Open ERP (finance) + Support Ticketing System
    - SharePoint (CMS) + OptimusBT (cart + ordering) + Dynamics CRM + Great Plains + SharePointHQ
    - DotNetNuke (CMS) + DNNSpot (cart + ordering) + Sigma Pro (CRM + helpdesk) + Open ERP
    For the helpdesk I liked Zendesk, but the server location was the stopping factor; similarly, for finance and CRM I liked Aplicor. I would not like to go into detailed requirements, as it would make things very complex. Could you please suggest which options are worth looking into? What other options do we have?

    Read the article

  • Ubuntu 12.04 LTS won't install - never finishes, please help

    - by Richard Higgins
    Want to try Ubuntu after using Windows for 30 years. Tried to install it five times on a Lenovo X120e notebook and twice on a Lenovo M57 desktop. No luck; worse than what Microsoft puts you through. I burned 12.04 LTS to disc. It installs up to the "Who Are You?" screen, then stops. I accepted the recommended computer name and lower-case user name, and chose "log me in automatically." After that there is no progress bar, no rotating or pulsing button, nothing to indicate that Ubuntu has not died or fallen asleep. Is that how it is written? I've never heard of a program that would take a long time to install while a user looked at a locked, dead screen. I just bought the M57 desktop for my son. It came with Ubuntu 10-something. I wanted to upgrade to 12.04 but it crashed, twice, to a DOS screen saying the PC lacked a certain "init" file. Various help-screen commands did not help. On the X120e, I thought a partially failed Ubuntu install was causing the problem, so I removed the drive, deleted the Ubuntu partition and replaced it. But same result. After I fill in my name and accept the computer and user names, the "continue" button does not appear to work. I can go "back" but not forward. I have waited torturous hours. It doesn't take more than two hours to install, does it? It is my own fault because of the high expectations I had for a sensible, hassle-free installation, but I am immensely disappointed. Thank you for any response.

    Read the article

  • Create MSDB Folders Through Code

    You can create package folders through SSMS, but you may also wish to do this as part of a deployment process or installation. In that case you will want a programmatic method for managing folders, so how can this be done? The short answer is: go and look at the table msdb.dbo.sysdtspackagefolders90. This is where folder information is stored, using a simple parent-child hierarchy. To add a new folder directly, we just insert into the table:

    ```sql
    INSERT INTO dbo.sysdtspackagefolders90 (
         folderid
        ,parentfolderid
        ,foldername)
    VALUES (
         NEWID()                -- New GUID for our new folder
        ,<<Parent Folder GUID>> -- The parent folder GUID if nested under another folder, or the root GUID 00000000-0000-0000-0000-000000000000
        ,<<Folder Name>>)       -- New folder name
    ```

    There are also some stored procedures:
    - sp_dts_addfolder
    - sp_dts_deletefolder
    - sp_dts_getfolder
    - sp_dts_listfolders
    - sp_dts_renamefolder
    To add a new folder to the root we could call the sp_dts_addfolder stored procedure:

    ```sql
    EXEC msdb.dbo.sp_dts_addfolder
        @parentfolderid = '00000000-0000-0000-0000-000000000000' -- Root GUID
       ,@name = 'New Folder Name'
    ```

    The stored procedures wrap very simple SQL statements, but provide a level of security, as they check the role membership of the user and do not require permissions to perform direct table modifications.
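
    Putting the two together, here is a minimal sketch of creating a nested folder path with those stored procedures. It sticks to the procedure, table and parameter names quoted above; the folder names are illustrative, and rather than assuming the procedure hands back the new GUID, the sketch looks it up from the table:

    ```sql
    DECLARE @rootid    UNIQUEIDENTIFIER
    DECLARE @financeid UNIQUEIDENTIFIER
    SET @rootid = '00000000-0000-0000-0000-000000000000'  -- root folder GUID

    -- Create a top-level folder, then find the GUID it was assigned.
    EXEC msdb.dbo.sp_dts_addfolder @parentfolderid = @rootid, @name = 'Finance'

    SELECT @financeid = folderid
    FROM   msdb.dbo.sysdtspackagefolders90
    WHERE  parentfolderid = @rootid AND foldername = 'Finance'

    -- Nest a child folder beneath it.
    EXEC msdb.dbo.sp_dts_addfolder @parentfolderid = @financeid, @name = 'Monthly'
    ```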

    Read the article

  • Planning milestones and time

    - by Ignas
    I was hired by a marketing company a year ago, initially for link building / SEO stuff, but I'm actually a web developer and took the job out of desperation to have one (I'm still quite young and just finished my 2nd year of university). From the 3rd day my boss realised that I'm not into that stuff at all, and since he had an idea for a web-based app we started to plan it. I estimated that it shouldn't take me longer than two months to do, but as I was building it we soon realised that we wanted to add more and more stuff to make it even better. So the development on my own lasted about 4 months, but then it became an enterprise-size app and we hired another programmer to work alongside me. The guy was awesome at what he did, but because I was assigned to be programmer/project manager I had to set up milestones with deadlines, and we missed most of them, because most of the time it was too much work and my lack of experience kept me setting really optimistic deadlines. We still kept adding features and changed the architecture of the application twice. My boss is a great guy and he gets that when we add features it expands the time frame in which things should be done, so he wasn't angry at me or the other guy. But I was feeling bad (I still am) that I suck at planning. I gained loads of experience on the programming side, but I still lack the management/planning skills, which makes me go nuts. So over the last year I have dedicated probably about 8 months of work to this app (obviously my studies affected it) and we're launching as a closed beta this month. So my question is: how do I get better at planning/managing a project, and how do you estimate the time things will take? What do you take into consideration when setting goals? I'm working alone again because the other guy moved from the city, but I'm sure we'll be hiring someone to help me maintain it, so I need to get better at this. Any hints, pointers or anything on the topic are appreciated.

    Read the article

  • How do I create a camera?

    - by Morphex
    I am trying to create a generic camera class for a game engine which works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. In particular, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's own Right axis; the 6DoF camera always rotates around its own axes; and the orbital camera just translates to a set distance from a fixed target point and always looks at it. The concepts are there; implementing this is not trivial for me. SharpDX already seems to have Matrices and Quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing, or what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts.
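
    Not a definitive implementation, but a minimal sketch of the usual approach, assuming SharpDX's math types and its Quaternion.RotationAxis, Vector3.Transform, Matrix.LookAtLH and Matrix.PerspectiveFovLH helpers (check the exact names against your SharpDX version). A single orientation quaternion generates Forward/Up/Right on demand, and each camera style is just a different way of feeding rotations into it:

    ```csharp
    using SharpDX;

    public class Camera
    {
        public Vector3 Position = Vector3.Zero;
        public Quaternion Orientation = Quaternion.Identity;

        // Derive the basis vectors from the orientation rather than storing them.
        public Vector3 Forward { get { return Vector3.Transform(Vector3.UnitZ, Orientation); } }
        public Vector3 Up      { get { return Vector3.Transform(Vector3.UnitY, Orientation); } }
        public Vector3 Right   { get { return Vector3.Transform(Vector3.UnitX, Orientation); } }

        // 6DoF: rotate about one of the camera's own (local) axes.
        public void RotateLocal(Vector3 localAxis, float angle)
        {
            Vector3 worldAxis = Vector3.Transform(localAxis, Orientation);
            Orientation = Quaternion.RotationAxis(worldAxis, angle) * Orientation;
            Orientation.Normalize();
        }

        // FPS: yaw about the world Y axis, pitch about the camera's Right axis.
        public void YawPitch(float yaw, float pitch)
        {
            Orientation = Quaternion.RotationAxis(Vector3.UnitY, yaw) * Orientation;
            Orientation = Quaternion.RotationAxis(Right, pitch) * Orientation;
            Orientation.Normalize();
        }

        // Orbital: sit at a fixed distance from the target, looking at it.
        public void Orbit(Vector3 target, float distance)
        {
            Position = target - Forward * distance;
        }

        public Matrix View
        {
            get { return Matrix.LookAtLH(Position, Position + Forward, Up); }
        }

        public static Matrix Projection(float fov, float aspect)
        {
            return Matrix.PerspectiveFovLH(fov, aspect, 0.1f, 1000f);
        }
    }
    ```

    The multiplication order here assumes the left-hand quaternion is applied after the right-hand one; if rotations come out inverted in your setup, swap the operands.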

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, and fail if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process, and has to make the best of what it is given. In other words, it may encounter a header file that is not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:
    1. Start in the directory of the including file.
    2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
    3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. Otherwise move to the parent of the current directory and go to step 2.
    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
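
    For reference, a direct transcription of that algorithm into Python (the function name is illustrative, and a real analyzer would presumably cache directory scans rather than re-walking the tree at every level):

    ```python
    import os

    def find_header(including_file, header_name):
        """Walk upward from the including file's directory; at each level,
        search that directory and everything below it for the header.
        header_name is assumed to be a bare file name, e.g. 'foo.h'."""
        directory = os.path.dirname(os.path.abspath(including_file))
        while True:
            for root, _subdirs, files in os.walk(directory):
                if header_name in files:
                    return os.path.join(root, header_name)   # found: done
            parent = os.path.dirname(directory)
            if parent == directory:
                return None   # reached the root: not on this machine, skip it
            directory = parent
    ```

    One practical caveat: each step up re-scans the subtree already searched, so memoizing visited directories keeps the search from going quadratic on deep trees.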

    Read the article

  • PHP Battle System for RPG game

    - by Jay
    I posted this a while ago on Stack Overflow; they thought it would be a better fit here, and I agree. Essentially I know what I want to accomplish, and I have something to the effect of what I want, but I am not satisfied with it. Here's the problem. Each user has some stats: STR (how hard they hit), DEF (dodging/blocking attacks), SPD (when they can strike), and STAMINA (basically their endurance in-game; if this runs out they can no longer fight and lose). What I need is something like this:
    UserA stats: STR 1000, DEF 2500, SPD 2000 (HP: 1000/1000)
    UserB stats: STR 1500, DEF 500, SPD 4000 (HP: 1000/1000)
    Because the second user has double the speed, he lands twice the number of hits on the first user before he gets hit. Because he has less strength than the first user's defence, he will do no, or little, damage. This is how the battle would theoretically go:
    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage
    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage, and sends him to the hospital!
    I was using this code, which is buggy and not efficient, and I just need a better way to do this: http://pastebin.com/15LiQQuJ
    Oh, and if anyone has some good ideas on how to improve the concept, that would be cool too! It's not that elaborate, so I'll be thinking of all sorts of things to make it more dynamic. Thanks.
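
    The pastebin code isn't reproduced here, but a minimal PHP sketch of the mechanics as described might look like this. The stat names come from the post; the tick loop and threshold scheme are assumptions, and it presumes at least one fighter can deal damage (a real loop would also drain STAMINA or cap the rounds):

    ```php
    <?php
    // Damage is strength minus defence, floored at zero.
    function strike(array &$attacker, array &$defender) {
        $damage = max(0, $attacker['str'] - $defender['def']);
        $defender['hp'] -= $damage;
        printf("%s strikes %s for %d damage\n",
               $attacker['name'], $defender['name'], $damage);
    }

    function battle(array $a, array $b) {
        // Accumulate "action points" each tick; whoever crosses the threshold
        // acts, so SPD 4000 strikes twice as often as SPD 2000.
        $clockA = $clockB = 0;
        $threshold = max($a['spd'], $b['spd']);
        while ($a['hp'] > 0 && $b['hp'] > 0) {
            $clockA += $a['spd'];
            $clockB += $b['spd'];
            if ($clockA >= $threshold) { strike($a, $b); $clockA -= $threshold; }
            if ($b['hp'] <= 0) break;
            if ($clockB >= $threshold) { strike($b, $a); $clockB -= $threshold; }
        }
        printf("%s wins!\n", $a['hp'] > 0 ? $a['name'] : $b['name']);
    }

    $userA = ['name' => 'UserA', 'str' => 1000, 'def' => 2500, 'spd' => 2000, 'hp' => 1000];
    $userB = ['name' => 'UserB', 'str' => 1500, 'def' => 500,  'spd' => 4000, 'hp' => 1000];
    battle($userA, $userB);
    ```

    Run as-is, it reaches the same outcome as the theoretical battle above (UserB lands twice as many zero-damage hits, and UserA's second 500-damage strike ends the fight), though the exact interleaving of the log lines depends on the tick scheme.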

    Read the article

  • Jumpstart your MySQL Cluster Knowledge

    - by Antoinette O'Sullivan
    Join companies in the web, gaming, telecoms and mobile areas by learning about MySQL Cluster's distributed, shared-nothing, real-time design. The 3-day MySQL Cluster course teaches you how to configure and manage the cluster nodes to ensure high availability. Learn how to install the different nodes and understand cluster internals. Here is a sample of some events on the schedule for this course:
    - Wien, Austria: 4 February 2013 (German)
    - Prague, Czech Republic: 10 December 2012 (Czech)
    - London, England: 12 December 2012 (English)
    - Hamburg, Germany: 21 January 2013 (German)
    - Stuttgart, Germany: 26 March 2013 (German)
    - Budapest, Hungary: 4 December 2012 (Hungarian)
    - Warsaw, Poland: 10 December 2012 (Polish)
    - Lisbon, Portugal: 3 December 2012 (European Portuguese)
    - Barcelona, Spain: 19 November 2012 (Spanish)
    - Madrid, Spain: 25 February 2013 (Spanish)
    - Jakarta, Indonesia: 21 January 2013 (English)
    - Singapore: 29 October 2012 (English)
    - Chicago, United States: 27 March 2013 (English)
    - Reston, United States: 6 February 2013 (English)
    For more information on the authentic MySQL curriculum, go to http://oracle.com/education/mysql

    Read the article

  • Compiling GCC or Clang for thumb drive on OSX

    - by user105524
    I have a MacBook that I don't have admin rights to on which I would like to be able to use either GCC or Clang. Since I lack admin rights I can't install binutils or a compiler to the /usr directory. My plan is to install both of these (using an old MacBook that I do have admin rights for) to a flash drive and then run the compiler off of there. How would one go about building GCC or Clang so that it could run just off of a thumb drive? I've tried both but haven't had any success. I've tried defining as many of the directories as possible through configure, but haven't been able to successfully build. My current configure script for gcc-4.8.1 is (where /Volumes/USB20FD is the thumb drive):

    ```
    ../gcc-4.8.1/configure --prefix=/Volumes/USB20FD/usr \
      --with-local-prefix=/Volumes/USB20FD/usr/local \
      --with-native-system-header-dir=/Volumes/USB20FD/usr/include \
      --with-as=/Volumes/USB20FD/usr/bin/as \
      --enable-languages=c,c++,fortran \
      --with-ld=/Volumes/USB20FD/usr/bin/ld \
      --with-build-time-tools=/Volumes/USB20FD/usr/bin \
      AR=/Volumes/USB20FD/usr/bin/ar \
      AS=/Volumes/USB20FD/usr/bin/as \
      RANLIB=/Volumes/USB20FD/usr/bin/ranlib \
      LD=/Volumes/USB20FD/usr/bin/ld \
      NM=/Volumes/USB20FD/usr/bin/nm \
      LIPO=/Volumes/USB20FD/usr/bin/lipo \
      AR_FOR_TARGET=/Volumes/USB20FD/usr/bin/ar \
      AS_FOR_TARGET=/Volumes/USB20FD/usr/bin/as \
      RANLIB_FOR_TARGET=/Volumes/USB20FD/usr/bin/ranlib \
      LD_FOR_TARGET=/Volumes/USB20FD/usr/bin/ld \
      NM_FOR_TARGET=/Volumes/USB20FD/usr/bin/nm \
      LIPO_FOR_TARGET=/Volumes/USB20FD/usr/bin/lipo \
      CFLAGS=" -nodefaultlibs -nostdlib -B/Volumes/USB20FD/bin -isystem/Volumes/USB20FD/usr/include -static-libgcc -v -L/Volumes/USB20FD/usr/lib " \
      LDFLAGS=" -Z -lc -nodefaultlibs -nostdlib -L/Volumes/USB20FD/usr/lib -lgcc -syslibroot /Volumes/USB20FD/usr/lib/crt1.10.6.o "
    ```

    Any obvious ideas on which of these options need to be set so the appropriate files land on the thumb drive during installation? What other magic occurs during an Xcode installation which isn't occurring here? Thanks for any suggestions.

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, an 8-core dude that sits in the corner. I made a mistake recently and took the key that had ESXi on it, formatted it and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project and now I need to spin up a whole bunch of stuff for CI, QA + DB, ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and then put everything there: Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure with Docker solutions, and I'm wondering if it is fair to compare them. Is there any reason why CoreOS with containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to switch. I'm not going to run a production env on the server, so I don't need HA for things like security or OS updates, where ESXi would let me restart one VM at a time. I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • Bazaar - pull the last revision only (and not the whole branch)

    - by Sandman4
    Shortly: how can I take the latest revision (only) from a remote Bazaar repository and add it as a new revision to a local repository?
    Background: I have a development system and a production system. On the development system there's a Bazaar repository with a branch holding lots of development revisions. Once in a while I want to incorporate the latest developments into the production system. I want to do so by some sort of "pulling" (the development system cannot connect to production for security reasons, but production can initiate a connection to development). On the production side I don't want the whole development revision history, only those revisions which actually go into production (normally the branch tip). Yet I want version control on the production system to keep track of what actually goes into production each time. Here is what I have tried:
    - bzr pull pulls the whole branch.
    - bzr pull --revision=last:1 also pulls the whole branch, up to the specified revision.
    - bzr merge --pull --revision=last:1 also pulls the whole branch.
    - bzr merge --pull --revision=last:2..last:1 and bzr merge --pull --change=last:1 both pull only the new changes introduced in the latest revision, but not changes introduced in the older revisions.
    - With a lightweight checkout I have no record of the revisions pulled into production; the local working tree remains part of the remote repository.
    The only way I see so far is importing the working tree using rsync or scp and committing it to a local branch afterwards. Any better ideas?
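
    If the rsync route turns out to be the pragmatic answer, a minimal sketch of it might look like this (host names and paths are placeholders; the --exclude keeps the development branch's .bzr metadata out of the production tree, so each import becomes exactly one new local revision):

    ```
    # On the production system, inside the production branch's working tree:
    rsync -a --delete --exclude='.bzr' dev-host:/path/to/dev-branch/ .

    # Record the imported tree as a single new revision.
    bzr add                       # version any files new to this tree
    bzr commit -m "Import development tip $(date +%Y-%m-%d)"
    ```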

    Read the article

  • I used a 301 Permanent Redirect to a 3rd party site by mistake! Can I stop the redirection?

    - by Dees
    Oh Noes! I've been parking a domain name for a friend/client of mine on my hosting provider (Dreamhost, FWIW) for a while, and they eventually asked me to redirect their domain to a 3rd party website which is currently featuring some relevant promotional content. Once this period ends, we will probably go ahead and set up a proper website for the domain on my hosting account. I used Dreamhost's "redirect" hosting option in their domain configuration panel, not realizing that it would implement a 301 Permanent redirect, or what the implications were. Now it seems that for any client that has visited the site anytime recently, the 301 redirect is still cached/in effect, although I have changed the domain settings back to regular Dreamhost full site hosting. It seems that the only thing that can be done is to wait out the TTL/cache expiration for the redirect. I have no idea how long that might be, so I'm wondering if there is any good way to cache-bust the redirect or otherwise undo its long-term effects. I put a simple html meta refresh in the domain folder to replace the 301 to keep the intended functionality in place, but I'm still not able to access the domain's other content normally, even via FTP, etc. Isn't there anything I can do? Otherwise, how long does it take for a cached redirect to expire? It's gonna be a bummer if it's really permanent.

    Read the article

  • Why The Athene Group Chose Fusion CRM

    - by Tony Berk
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group. This year, The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to grow organically into a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now, more than ever, given the current global economic and competitive landscape, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives we discussed internally that would enable us to accomplish this: collaboration and the concept of "insight to action". With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth that we were looking for. When we started to discuss Fusion CRM, we immediately saw several next-generation tools that would embrace these two objectives. For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap) we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements, ensuring mutual success for both Athene and our clients. When we looked at "insight to action", we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article

  • Automatically triggering standard spaceship controls to stop its motion

    - by Garan
    I have been working on a 2D top-down space strategy/shooting game. Right now it is only in the prototyping stage (I have basic movement working), but now I am trying to write a function that will stop the ship based on its velocity. This is being written in Lua, using the Love2D engine. My code is as follows (note: object.dx is the x-velocity, object.dy is the y-velocity, object.acc is the acceleration, and object.r is the rotation in radians):

    ```lua
    function stopMoving(object, dt)
        local targetr = math.atan2(object.dy, object.dx)
        if targetr == object.r + math.pi then
            local currentspeed = math.sqrt(object.dx*object.dx + object.dy*object.dy)
            if currentspeed ~= 0 then
                object.dx = object.dx + object.acc*dt*math.cos(object.r)
                object.dy = object.dy + object.acc*dt*math.sin(object.r)
            end
        else
            if (targetr - object.r) >= math.pi then
                object.r = object.r - object.turnspeed*dt
            else
                object.r = object.r + object.turnspeed*dt
            end
        end
    end
    ```

    It is implemented in the update function as:

    ```lua
    if love.keyboard.isDown("backspace") then
        stopMoving(player, dt)
    end
    ```

    The problem is that when I am holding down backspace, it spins the player clockwise (though I am trying to have it go in whichever direction most efficiently reaches the angle it needs to face), and it never starts to accelerate the player in the direction opposite to its velocity. What should I change in this code to get that to work? EDIT: I'm not trying to just stop the player in place; I'm trying to get it to use its normal commands to neutralize its existing velocity. I also changed math.atan to math.atan2, apparently it's better. I noticed no difference when running it, though.
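
    Not presented as the one right answer, but a sketch of the usual fixes: aim at the angle opposite the velocity (the original targetr points along it), compare angles with a tolerance instead of ==, and wrap the angular error into [-pi, pi] so the ship always turns the short way. The thresholds here are arbitrary:

    ```lua
    function stopMoving(object, dt)
        local speed = math.sqrt(object.dx * object.dx + object.dy * object.dy)
        if speed < 1 then
            object.dx, object.dy = 0, 0          -- close enough: snap to rest
            return
        end
        -- Face against the velocity vector, not along it.
        local targetr = math.atan2(-object.dy, -object.dx)
        -- Smallest signed angle from current heading to target heading.
        local diff = (targetr - object.r + math.pi) % (2 * math.pi) - math.pi
        if math.abs(diff) < 0.05 then
            -- Roughly aligned: thrust, which now opposes the motion.
            object.dx = object.dx + object.acc * dt * math.cos(object.r)
            object.dy = object.dy + object.acc * dt * math.sin(object.r)
        else
            -- Turn the short way, clamped so we don't overshoot the target angle.
            local turn = math.min(object.turnspeed * dt, math.abs(diff))
            object.r = object.r + (diff > 0 and turn or -turn)
        end
    end
    ```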

    Read the article

  • .NET and SMTP Configuration

    - by koevoeter
    Sometimes I feel stupid about discovering .NET features that have been there since an old release (2.0 in this case)... Apparently you can just use the "mailSettings" config section and never have to configure your SmtpClient instance in code again (no, not hard-coded):

    ```xml
    <system.net>
      <mailSettings>
        <smtp deliveryMethod="Network" from="My Display Name &lt;[email protected]&gt;">
          <network host="mail.server.com" />
        </smtp>
      </mailSettings>
    </system.net>
    ```

    Now you can go all like:

    ```csharp
    new SmtpClient().Send(mailMessage);
    ```

    ...and everything is configured for you, even the from address (which you can obviously override).

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work, I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic and database interactions, as well as the model classes. In the test application I am creating (which is ASP.NET MVC 4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a model class. The reason behind this is that I want to take advantage of strongly typing my views to a model. This is all still in a proof-of-concept stage, so I have not done any performance testing on it, but I am curious whether what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code in the client controller:

    ```csharp
    public class HomeController : Controller
    {
        protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

        public ActionResult Index() // This view is strongly typed against User
        {
            // Testing against Joe Bob
            string adSAMName = "jBob";
            WebClient client = new WebClient();
            string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
            // 'User' is a Model class that I have defined.
            User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
            return View(result);
        }
        // ...
    }
    ```

    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above:
    - Instantiate a new WebClient
    - Define the URL to hit
    - Deserialize the result and cast it to a Model class
    So it is possible (and likely) I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will:
    - Let me keep strongly typed views
    - Do my work on the server rather than on the client (this is just a preference, since I can write C# faster than I can write JavaScript)
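
    For comparison, a hedged sketch of the same call using HttpClient instead of WebClient: one shared client, async I/O so MVC worker threads aren't tied up during the HTTP hop, and a generic helper so the 4-5 calls per page don't repeat the plumbing. The type and route names are taken from the snippet above; everything else is an assumption:

    ```csharp
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Mvc;
    using Newtonsoft.Json;

    public class HomeController : Controller
    {
        // HttpClient is designed to be shared and reused, not created per request.
        private static readonly HttpClient client = new HttpClient();
        protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

        // One generic helper replaces the repeated WebClient/deserialize pairs.
        private async Task<T> GetApiResultAsync<T>(string relativeUrl)
        {
            string json = await client.GetStringAsync(dashboardUrlBase + relativeUrl);
            return JsonConvert.DeserializeObject<T>(json);
        }

        public async Task<ActionResult> Index()
        {
            User result = await GetApiResultAsync<User>("GetUserRecord?userName=jBob");
            return View(result);   // the view stays strongly typed against User
        }
    }
    ```

    Async actions like this require MVC 4 on .NET 4.5; on earlier stacks the same shape works with synchronous calls, just without freeing the worker thread.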

    Read the article

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end you have users; on the other end you have activities. I was wondering if there is a best practice for relating the two. The simplest way I can think of is to have every activity require a role, and to assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system. A design I recently tried was to have users who have roles, roles that have privileges, and activities that require some combination of privileges. For the trivial case this is more complex, but I think it will scale better; yet after I implemented it, I felt like it was overkill for the system I had. Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you only need one. And an even more complex variant would allow logical statements of role ownership for using an activity (i.e. must have A and (B exclusive-or C) and must not have D). I could continue to list more, but I think this already gives the picture, and many of these have trade-offs. But in software design there are oftentimes solutions which, while perhaps not perfect in every possible case, are so clearly top of the pack that choosing them isn't even considered opinion-based (e.g. how to store passwords: plain text is worse, hashing better, hashing with salt even better, despite the increased complexity of each level; or: Smart UI designs for applications are bad, even if the best design is subjective). So, is there a best practice for authorization design that is not purely opinion-based/subjective?
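
    As a concrete reference point, here is a minimal Python sketch of the middle option (users have roles, roles grant privileges, activities require a privilege set). All of the role, privilege and activity names are illustrative, not from the question:

    ```python
    # Roles map to the privileges they grant.
    ROLE_PRIVS = {
        "clerk":   {"invoice.read"},
        "manager": {"invoice.read", "invoice.approve"},
        "admin":   {"invoice.read", "invoice.approve", "user.manage"},
    }

    # Activities declare the privileges they require.
    ACTIVITY_REQS = {
        "view_invoice":    {"invoice.read"},
        "approve_invoice": {"invoice.read", "invoice.approve"},
    }

    def can_perform(user_roles, activity):
        """True if the union of the user's role privileges covers the activity."""
        granted = set().union(*(ROLE_PRIVS[r] for r in user_roles))
        return ACTIVITY_REQS[activity] <= granted   # subset test

    assert can_perform({"manager"}, "approve_invoice")
    assert not can_perform({"clerk"}, "approve_invoice")
    ```

    The indirection through privileges is exactly what makes this scale: adding an activity never touches the user table, only the activity's required-privilege set.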

    Read the article

  • Need to re-build an application - how?

    - by Tom
    For our main system, we have a small monitor application that sits outside our network and periodically tries to log in to verify the system still works. We have a problem with the monitor, though, in that the communications component set (Asta 3 inside Delphi applications) doesn't always connect through. Overall, I'd say it's about 95% reliable, but that other 5% kills the monitor since it will try to log in and hang on the connection attempt (there is no timeout in the component). This really isn't an issue on the client side of the system, since the clients don't disconnect and reconnect repeatedly on the same application instance, but I need a way to make sure the monitor stays up and continues working even when the component fails on a run. I have a few ideas as to how to have the program run, the main one being to put the communications inside a threaded data module so that if one thread hangs, another thread can test later and the program keeps going. Does this sound like a valid way to go? Any other ideas on how to ensure a reliable monitoring application with a less than 100% reliable component? Thanks. P.S. Not sure these tags are the most appropriate. Tried including "system-reliability" as one, but not high enough rep to create it.

    Read the article

  • Keeping files that are often changed in sync between desktop and laptop

    - by N.N.
    I'm looking for a way to keep a desktop and a laptop in sync. What I want to keep in sync are some folders, mainly ~/Documents, that are changed often while I'm working on them. If it matters, I can connect to my desktop from anywhere via a URL, but my laptop is harder to access since it might be behind NAT and such. I have been looking at Ubuntu One, but it seems not to go well with working on documents written in LaTeX. If I work on a .tex file in the Ubuntu One directory and compile it (with pdflatex) every now and then (as often as every 10 seconds when working), it will create several new files, including a PDF, which are all uploaded to Ubuntu One. This seems stupid, since it creates continuous uploads while I work on .tex files. I also usually keep .tex documents version-controlled by git, and then every commit (which can also happen frequently) causes an upload (via changes in ./.git), so again it happens continuously when working. Another example is editing images that are saved often. What I think would be best is for sync to happen every tenth minute, or at the end of every working session (but there might be some other way to handle this?).

    Read the article

  • Remote Working & Relocation

    - by James Burgess
    Sorry if this question is a duplicate; I did some extensive searching and found nothing on quite the same topic (though a couple on partially overlapping topics). Recently, whilst on holiday in Munich, Germany, I was taken aback by the sheer number of programming-related posts available in the city that I easily qualify for (both in terms of knowledge and experience). The advertised working environments seemed good and the pay seemed to be at least as good as what I'd expect here in the UK. Probably 80% of the advertisements I saw on the underground were for IT-related jobs, and a good 60% of those I was easily qualified for. At the moment I work as a freelancer, mostly on web and small software projects, but seeing the vast availability of jobs in Munich versus my local area has me thinking about remote working. I'm unable to relocate for a job for the next 3 years (my wife has a contract to continue being a doctor at her current hospital for that time), but I would almost certainly be open to it after that (after all, my wife and I both love Munich). In the meanwhile, I would be very interested in remote working. So, my question is: do companies ever take on remote workers (even with semi-frequent trips to the office) from abroad, with a view to later relocation? And, if so, how do you go about broaching the topic with a recruiter when getting in contact about a job posting? Language isn't a barrier for me here, as 90% of the jobs I've looked at in Munich don't require German speakers (it seems they have a big recruiting market abroad). I'm also under no illusions about the disadvantages of remote working, but I'm more interested in the viability of the scenario than the intricacies (at least at this point). I'd really appreciate any contributions, especially from those who have experience of working in such a scenario!

    Read the article

  • The NEW MySQL for Developers Course

    - by Antoinette O'Sullivan
    Just released: the new MySQL for Developers training course. This 5-day course covers everything a developer needs to know when planning, designing and implementing applications using MySQL, with realistic examples using languages such as Java and PHP. The course gives in-depth coverage of statements that access and modify data, and shows the student how to design and create other MySQL objects such as triggers, views, and stored procedures. You can take this course:
    - From your desk, as a live virtual offering. There are over 800 events on the schedule, so you should find one in a timezone near you. The virtual events are also delivered in many languages, including English, German, Korean, Latin American Spanish, ...
    - In a classroom. Here is a sample of events on the schedule:
      Prague, Czech Republic: 8 October 2012 (Czech)
      Warsaw, Poland: 5 November 2012 (Polish)
      Wien, Austria: 12 November 2012 (German)
      London, England: 15 October 2012 (English)
      Bern, Switzerland: 11 April 2013 (German)
      Zurich, Switzerland: 14 November 2012 (German)
      Milan, Italy: 19 November 2012 (Italian)
      Rome, Italy: 15 October 2012 (Italian)
      Gummersbach, Germany: 11 February 2013 (German)
      Hamburg, Germany: 12 November 2012 (German)
      Munich, Germany: 10 June 2013 (German)
      Lisbon, Portugal: 26 November 2012 (European Portuguese)
      Porto, Portugal: 18 February 2013 (European Portuguese)
      Nairobi, Kenya: 19 November 2012 (English)
      Madrid, Spain: 10 December 2012 (Spanish)
      Petaling Jaya, Malaysia: 15 October 2012 (English)
      Bangkok, Thailand: 29 October 2012 (English)
    For further information on the authentic MySQL curriculum, to register for an event or to express interest in an additional event, go to http://oracle.com/education/mysql.

    Read the article

  • NINE Great Reasons to Attend the GlassFish Community Event at JavaOne 2012

    - by Alexandra Huff
    Are you coming to the annual GlassFish Community Event at JavaOne this year? Here are nine great reasons not to miss it! Great company Meet and mingle with community leaders and luminaries, the GlassFish engineering team, and Oracle executives! Learn from others How are your peers using GlassFish in creative ways? A few community members will share their challenges and creative solutions. Ask tough questions Meet Oracle GlassFish and Middleware executives; the panel discussion will be moderated by one of our stellar community leaders! Shirts! Be sure to get this year's GlassFish T-shirt, designed by and voted on by YOU, our community members! Don't miss it - they go fast. Share your story Give us a two minute update on why you love GlassFish and how you are using it! We will immortalize you in a very brief video and post it to our GlassFish Stories page! Find out... about the new book, hot off the press, authored by our very own Arun Gupta: "Java EE 6 Pocket Guide: A Quick Reference for Simplified Enterprise Java Development" If you share... your story, you will win a copy of Arun's new book as our thank you gift! Suggest... some ideas on how to make GlassFish even better! Have fun Lively discussion, news and updates, excellent company -- this is THE place to be on Sunday at JavaOne! Convinced? Excellent! Then please register here! A JavaOne Pass is required to enter Moscone Center. All passes accepted, including Discover, Exhibitor, Press, Blogger, etc. Agenda 11:00 - 11:05: Introduction 11:05 - 11:30: Roadmap and Community Updates 11:30 - 12:15: Q&A with Executive Speaker Panel from Oracle and the GlassFish Team 12:15 - 01:00: Customer Testimonials Location: Moscone West, Room 2005 Add sessions UGF10359 and UGF10360 to Schedule Builder

    Read the article
