Search Results

Search found 22308 results on 893 pages for 'floating point'.

Page 609/893 | < Previous Page | 605 606 607 608 609 610 611 612 613 614 615 616  | Next Page >

  • ASP.NET Web Forms is bad, or what am I missing?

    - by iveqy
    Being a PHP guy myself I recently had to write a spider for an asp.net site. I was really surprised by the different approach to ajax and form-handling. For example, in the PHP sites I've worked with, a deletion of a database entry would be something like: GET delete.php?id=&confirm=yes and get a "success" back in some form (in the ajax case, probably a json reply). In this asp.net application you would instead post a form, including all inputs on the page, with a huge __VIEWSTATE and __EVENTVALIDATION. This would be more than 10 times as big as above. The reply would be the complete page again, with a footer containing some structured data for javascript to parse and display the result. Again, the whole page is sent, and then thrown away(?) since it's already displayed. Why not just send the footer with the data to parse (it's not json nor xml but a | separated list)? I really can't see why you would design a system that way. Usually you have a fast client, and a somewhat fast server, but a really slow connection. Why not keep the data transfer to a minimum? Why those huge __VIEWSTATE and __EVENTVALIDATION? It seems that everything is done way too chatty and way too complicated. I really can't see the point, and that usually means that I'm missing something. So please tell me, what are the reasons for this design and what benefits (and weaknesses) does it have? (Yes I know that __VIEWSTATE is used to tell what type of form configuration should be sent back to the server. But WHY is this needed?) Please keep this discussion strictly technical and avoid flamewars. Update: Please excuse the somewhat rantish question. I tried to explain my view to be able to get a better answer. I am not saying that asp.net is bad, I am saying that I don't understand the meaning of those concepts. Usually that means that I have things to learn rather than the concepts being wrong. I appreciate the explanations that "you don't have to do it this way in asp.net"; I'll read up on MVC and other .net technologies. However, there must be a reason for this site (the one I referred to) to be written the way it is. It's written by professionals for a big organisation with far more experience than I have. Any explanation of their (possible) design choices would be welcome.

    Read the article

  • Remove Clutter from the Opera Speed Dial Page

    - by Asian Angel
    Do you want to clean up the Speed Dial page in Opera so that only the thumbnails are visible? Today we show you a couple of tweaks that will make it happen. Speed Dial Page The search bar and text at the bottom take up room and add clutter to the look and feel of Opera’s Speed Dial page. Changing the Settings Two small tweaks to the config settings will clean it all up. To get started type opera:config into the address bar and press enter. Type “speed” into the quick find bar and look for the Speed Dial State entry. Change the 1 to 2 and click save. You will see the following message concerning the changes…click OK. Next type “search” into the quick find bar and look for the Speed Dial Search Type entry. Remove all of the text in the blank and click save. Once again you will see a message about the latest change that you have made. At this point you may need to restart Opera for both changes to take full effect. There will be a noticeable difference in how the Speed Dial page looks afterwards and is much cleaner without the Search bar and text field. You will also still be able to access the right click context menu just like before. Conclusion If you have been looking to get a cleaner and less cluttered Speed Dial page in Opera, then these two little hacks will get the job done!

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane: glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get: glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); Which produces a reasonable display ( camera at 0,5,0 looking down y axis ) So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result ( of course! )
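
    One way to print the transformed vertices is to read the current modelview matrix back with glGetFloatv and multiply a vertex by it on the CPU. Below is a minimal sketch, assuming classic fixed-function OpenGL; the helper name print_transformed is invented for illustration and is not part of the original question.

        #include <cstdio>
        #include <GL/gl.h>

        // Invented debug helper: multiply one vertex by the current modelview
        // matrix on the CPU and print the result, so the effect of glLoadMatrixf
        // can be inspected. glGetFloatv returns the matrix in column-major order.
        static void print_transformed(GLfloat x, GLfloat y, GLfloat z)
        {
            GLfloat m[16];
            glGetFloatv(GL_MODELVIEW_MATRIX, m);

            const GLfloat v[4] = { x, y, z, 1.0f };
            GLfloat out[4];
            for (int row = 0; row < 4; ++row)
                out[row] = m[0 * 4 + row] * v[0] + m[1 * 4 + row] * v[1]
                         + m[2 * 4 + row] * v[2] + m[3 * 4 + row] * v[3];

            // The shadow matrix is projective, so divide by w before printing.
            std::printf("(%g, %g, %g)\n",
                        out[0] / out[3], out[1] / out[3], out[2] / out[3]);
        }

    Calling this right after glLoadMatrixf( shadow ) and comparing the output with the hand-computed vertices should show whether the matrix is being interpreted as intended, keeping in mind that glLoadMatrixf treats the supplied array as column-major.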

    Read the article

  • 12.04 boots fine, with graphical splash screen, but then Monitor "out of range"

    - by Jim Bednar
    I see dozens of posts from people whose monitors are saying "out of range" under Ubuntu; seems like there are some serious problems in Ubuntu with autodetection of monitor capabilities. :-( But none of the many, many suggestions I found have solved my problem, and right now I can't use anything graphical on this machine! History: I installed Ubuntu 12.04 on my HP Proliant Microserver N40L, which worked reasonably at the default resolution across several reboots. At some point I noticed that the proprietary video driver was not in use, and tried to install one to get better window-drawing speeds, but it failed with some sort of error, and I gave up on that. A few weeks later when I next rebooted, it showed the usual BIOS screen and various boot loading screens (including GRUB), and then the usual purple Ubuntu splash screen with the dots showing that things were loading, but when it finished booting the monitor went black and eventually showed "Out of range" (with no other information). Given that there were several weeks between reboots (it's a server, after all), I've no idea if it was some system update, trying to install the proprietary drivers, or something else that caused the problem. Anyway, the system has booted fine, as I can do Ctrl-Alt-F1 to get a text prompt and can log in there. But Ctrl-Alt-F7 goes back to the out of range error. Some posters said to try Ctrl-Alt-- (minus) to cycle through resolutions until one works, but that didn't have any visible effect. Many, many others said it was a grub problem, which seems unlikely given that grub's screen looks fine, but I tried editing /etc/default/grub to set a particular resolution (trying many of them) and running update-grub, with no apparent effect. Rebooting into failsafe mode works the same as regular mode. Replacing xorg.conf with xorg.conf.failsafe works the same too. I'm at my wits' end! Isn't there anything I can do to convince Ubuntu to choose a mode that the monitor supports? E.g. the one that it is using for the splash screen? I don't need great resolution on this machine, just anything that works!!!!! Help!!!!!! Please!!!!

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an E-Learning platform that hosts content from different sources (conferences, JUGs meetings, etc.). There is a lot of technical content available for online as well as offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances. Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework." Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of ‘NB 7.3’s Project Easel’ is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit-based browsers. NetBeans 7.3 beta NetBeans 7.3 screencasts Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and the GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne: Wednesday October 3rd Time Title Location 8:30-9:30am What's New in Servlet 3.1: An Overview Parc 55 Mission 8:30-9:30am Bean Validation 1.1: What's New Under the Hood Parc 55 Cyril Magnin II/III 10:00-11:00am JSR 353: Java API for JSON Processing Parc 55 Mission 10:00-12:00pm Tutorial: Integrating Your Service into the GlassFish PaaS Platform Parc 55 Devisidero 11:30-12:30pm What's New in JSF: A Complete Tour of JSF 2.2 Parc 55 Cyril Magnin I 11:30-12:30pm Best of Both Worlds: Java Persistence with NoSQL and SQL Parc 55 Mission 1:00-2:00pm Sharding Middleware to Achieve Elasticity and High Availability in the Cloud Parc 55 Market Street 1:00-2:00pm Pimp My RESTful Java Applications Parc 55 Cyril Magnin I 3:00-4:00pm Migrating Spring to Java EE Parc 55 Cyril Magnin II/III 4:30-5:30pm JavaEE.Next(): Java EE 7, 8, and Beyond Parc 55 Cyril Magnin II/III 4:30-5:30pm HTML5 WebSocket and Java Parc 55 Cyril Magnin I 4:30-5:30pm Easy Middleware for Your Embedded Device Nikko Ballroom II/III

    Read the article

  • Less Can Be More In E-Commerce

    - by Michael Hylton
    Today’s consumers are inundated with product choices and vendors. Visit your favorite electronics retailer and see the vast assortment of flat panel televisions. Or the variety of detergents at the supermarket. All of this can be daunting for the average consumer who is looking for the products and services that interest them.  In a study titled “Choice is Demotivating: Can One Desire Too Much of a Good Thing”, the author, Sheena Iyengar, found that participants actually reported greater subsequent satisfaction with their selections and wrote better essays when their original set of options had been limited. The same can be said for e-commerce and your website. Being able to quickly convert shoppers into buyers with effective merchandising is what makes leading businesses successful. You want to engage each individual visitor with the most-relevant content to drive higher conversions and order values while decreasing abandonment, but predicting what will resonate with each customer is difficult. In a world of choices, online merchandising tools can help personalize, streamline, and refine what your customers view when they browse your online catalog. The key to being effective is to align your products and content as closely as possible with the customer’s needs. The goal on the home page is to promote your brand and push visitors farther into the site. The home page is often the starting point for repeat customers as well as for new visitors hoping to address their current product needs. As the customer selects different filters and narrows the choices, valuable information is being provided to the retailer about the customer’s current need—regardless of previous search behavior or what other customers with a similar demographic profile have purchased. Together with search pages, category browse pages are among the primary options available to customers as a means of finding products on your site. Once a customer reaches the product detail page, it is clear what that person desires, regardless of the segment the customer falls into. However, don’t disregard campaign-based promotions completely. A campaign targeted to all customers but featuring rule-driven promotions tied to the product can be effective. Click here to learn more about merchandising techniques so that what your customer sees is half full and not half empty.

    Read the article

  • Tell me a Story

    - by Geoff N. Hiten
    I recently had a friend ask me to review his resume.  He is a very experienced DBA with excellent skills.  If I had an opening I would have hired him myself.  But not because of the resume.  I know his skill set and skill levels, but there is no way his standard resume can convey that.  A bare bones list of job titles and skills does not set you apart from your competition, nor does it convey whether you have junior or senior level skills and experience.  The solution is to not use the standard format. Tell me a story.  I want to know what you were responsible for.  Describe a tough project and how you saved time/money/personnel on that project.  Link your work activity to business value.  Drop some technical bits in there since we do work in a technical field, but show me what you can do to add value to my business well above what I would pay you.  That will get my attention. The resume exists for one primary and one secondary reason.  The primary reason is to get the interview.  A resume won’t get you a job, so don’t expect it to.  The secondary reason is to give you and the interviewer a starting point for conversations.  If I can say “Tell me more about when….” and reference an item from your resume, then that is great for both of us.  Of course, you better be able to tell me more, both from the technical and the business side, at least if I am hiring a senior or higher level position.  As for the junior DBAs, go ahead and tell your story too.  Don’t worry about how simple or basic your projects or solutions seem.  It is how you solved the problem and what you learned that I am looking for.  If you learn rapidly and think like a DBA, I can work with that, regardless of your current skill level.

    Read the article

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose. Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece it's worth having a more full-on attack. At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset. Essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this. The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output, and no debugging. The only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this… every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device. How about this as well - in order to get a file into the package it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally, it happens on RIM's servers in Waterloo. Thus whenever you deploy the app you have this utility call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing on success or failure. 100 inbound emails, per deployment. (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.) To clarify: * Change the script, * Build it using Ant, * Ant will spin up a Java app that talks to RIM's servers to sign it. * Receive 100 emails, assuming the server is up. * App deployed - takes about 30 seconds. * BlackBerry device restarts - takes about six minutes. * Find and open the app. Go through security prompts. * Test the app, with no "console.log" output and no debugger. "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator. Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combining that with no worthy debugging tools turns the toolset into a joke.

    Read the article

  • Ubuntu Server and setting up two nic cards

    - by kmalik
    I have ubuntu server on a computer with a wireless and hardwired nic card. The wireless needs to get the internet and pass it to the ubuntu server as well as pass it along to the hardwired nic card to more computers. I am having issues getting the basic set up as I believe the route table is grabbing from the wrong nic card. The router is 192.168.1.0 and the server is set to 192.168.1.11 on the wireless card through DHCP. ETH0 (wired nic card) is set up to be 10.10.10.0 and the server is 10.10.10.1. I am not a linux or networking guru but basically I am trying to have internet come from a guest network 192.168.1.0 I believe to give internet to the ubuntu server, then the ubuntu server will also A) have the wired nic serve DHCP addresses to other computers via a switch or router (that acts as a switch) via 10.10.10.0 addresses. And I would love if it also passed along internet capabilities as well if possible. But really at this point my hope is to at least get the internet working on the server and the DHCP to pass correctly. At the moment the specific issue I am having is getting ubuntu server to connect to the internet and have both nic cards up and running correctly. Any help would be appreciated! The route table is as follows: Destination Gateway Genmask Flags Metric Iface 0.0.0.0 10.10.10.1 0.0.0.0 UG 100 eth0 10.10.10.0 0.0.0.0 255.255.255.0 U 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 eth1 My interfaces file is set up as follows: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 10.10.10.1 netmask 255.255.255.0 network 10.10.10.0 broadcast 10.10.10.255 gateway 10.10.10.1 domain-name-servers 192.168.1.0 auto eth1 iface eth1 inet dhcp netmask 255.255.255.0 gateway 192.168.1.0 wpa-driver wext wpa-ssid "ssid_name" wpa-ap-scan 1 wpa-proto wpa wpa-pairwise ccmp wpa-group ccmp wpa-key-mgmt wpa-psk wpa-psk "HASH" My DHCPD.conf (as there is a domain name server on here) is as follows: ddns-update-style none default-lease-time 600 max-lease-time 7200 authoritative option domain-name "Kamron's Network" option subnet-mask 255.255.255.0 option broadcast-address 10.10.10.255 option routers 192.168.1.0 option domain-name-server 192.168.1.0 98.223.128.213 option subnet 10.10.10.0 netmask 255.255.255.0 { range 10.10.10.10 10.10.10.99 } log-facility local7

    Read the article

  • Sometimes keyboard & touchpad work... sometimes not

    - by Voyagerfan5761
    When I first ran Ubuntu from CD on this Dell Inspiron 2650, it worked for about ten to fifteen minutes, then it hung (I was probably trying to do too much at once from a Live CD). The next time, my mouse and keyboard didn't work. I rebooted three times and finally got them working. I then installed Ubuntu alongside Windows XP. After installing, selecting the OS in GRUB worked, but my touchpad and keyboard were again not working. I rebooted, and they worked. (I fortunately had a USB mouse with which to reboot.) Booting Ubuntu and then rebooting to enable my keyboard and touchpad has become a routine ever since. Often several reboots are required; at one point I had to reboot over a dozen times in a row before getting a session where everything worked properly. (My installation has been in place for about a week now.) I've looked around for a device manager equivalent to no avail. Sometimes the hardware is properly detected, and sometimes it's not. Once or twice I've had the keyboard detected properly but the touchpad not. Plugging in my wireless card also sometimes requires a plug, unplug, and plug again to get it working. So is there some solution? I'm without an Internet connection at home, and this "laptop" is really a wall wart on my desk, so suggestions for packages may take a while to test. Xorg logs I captured four sample Xorg logs: one from a startup where the devices worked; one from when they didn't; one from a session where Ubuntu thought my touchpad was a normal mouse; and one from a session where my keyboard worked but the touchpad didn't. See this gist. Updated 2010-12-15 01:50 UTC with Xorg.0.log.keyboardonly file illustrating the case where the keyboard worked but not the touchpad. Updated 2011-01-11 04:10 UTC with Xorg.0.log.touchpadregmouse to illustrate a case where the touchpad was detected as a regular mouse (no "Touchpad" tab in mouse prefs).

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today where it was 700 three months ago with our old product. I've figured that GWT was taking client side metrics into account so I've added some measures on our Boomerang beacon and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed and everything looks better than it was before. We even have an 82% on Google's Page Speed tool which is quite cool for a site with some ads in it :) Lately, we have signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved things from 650ms to about 500ms, which is good (still not great but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time where we see it decreasing at the same time. Have you ever had the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools so that we could improve our site and constantly check if it is what Google wants? Edit 2011/07/26: Thanks for your answers guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!) and we are trying to fix them. I'll keep you posted as soon as I have more info. Thanks again!

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I got offered a job to build a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, the high cost of development charged by a service company, and to have someone young he can trust onboard to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer thinking I can build anything. I was calling the shots. After some research I settled on PHP, and started with plain PHP, no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner, more organized. It looked very good. But again the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practice. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code, a real pain to fix a bug, all the logic is in the controllers, and there is little object oriented design. I'm having this persistent thought that I have to rewrite the whole project. However, I can't do it... They keep asking when is it going to be all done. I can not imagine this code deployed on a server. Plus I still know nothing about code efficiency and the web application's performance. On one hand, the company is waiting for the product and can not wait anymore. On the other hand I can't see myself going any further with the actual code. I could finish up, wrap it up and deploy, but god only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • Setting up port forwarding for 7000 appliance VM in VirtualBox

    - by uejio
    I've been using the 7000 appliance VM for a lot of testing lately and relied on others to set up the networking for the VM for me, but finally, I decided to take the plunge and do it myself.  After some experimenting, I came up with a very brief set of steps to do this all using the VirtualBox CLI instead of the GUI. First download the VM image and unpack it somewhere.  I put it in /var/tmp. Then, set your VBOX_USER_HOME to some place with lots of disk space and import the VM: export VBOX_USER_HOME=/var/tmp/MyVirtualBox VBoxManage import /var/tmp/simulator/vbox-2011.1.0.0.1.1.8/Sun\ ZFS\ Storage\ 7000.ovf (go get a cup of tea...) Then, set up port forwarding of the VM appliance BUI and shell: First set up the port as NAT: VBoxManage modifyvm Sun_ZFS_Storage_7000 --nic1 nat Then set up rules for port forwarding (pick some unused port numbers): VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestssh,tcp,,4622,,22" VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestbui,tcp,,46215,,215" Verify the settings using: VBoxManage showvminfo Sun_ZFS_Storage_7000 | grep -i nic Start the appliance: $ VBoxHeadless --startvm Sun_ZFS_Storage_7000 & Connect to it using your favorite RDP client.  I use a Sun Ray, so I use the Sun Ray Windows Connector client: $ /opt/SUNWuttsc/bin/uttsc -g 800x600 -P <portnumber> <your-hostname> & The portnumber is displayed in the output of the --startvm command. (This did not work after I updated to VirtualBox 4.1.12, so maybe at this point, you need to use the VirtualBox GUI.) It takes a while to first bring up the VM, so please be patient. The longest time is in loading the smf service descriptions, but fortunately, that only needs to be done the first time the VM boots.  There is also a delay in just booting the appliance, so give it some time. Be sure to set the NIC rule on only one port and not all ports, otherwise there will be a conflict in ports and it won't work. After going through the initial configuration screen, you can connect to it using ssh or your browser, on the ports forwarded above: ssh -p 4622 root@<your-host-name> https://<your-host-name>:46215 BTW, for the initial configuration, I only had to set the hostname and password.  The rest of the defaults were set by VirtualBox and seemed to work fine.

    Read the article

  • Parallel MSBuild FTW - Build faster in parallel

    - by deadlydog
    Hey everyone, I just discovered this great post yesterday that shows how to have msbuild build projects in parallel. Basically all you need to do is pass the switches “/m:[NumOfCPUsToUse] /p:BuildInParallel=true” into MSBuild. Example to use 4 cores/processes (If you just pass in “/m” it will use all CPU cores): MSBuild /m:4 /p:BuildInParallel=true "C:\dev\Client.sln" Obviously this trick will only be useful on PCs with multi-core CPUs (which we should all have by now) and solutions with multiple projects; so there’s no point using it for solutions that only contain one project.  Also, testing shows that using multiple processes does not speed up Team Foundation Database deployments either, in case you’re curious. Also, I found that if I didn’t explicitly use “/p:BuildInParallel=true” I would get many build errors (even though the MSDN documentation says that it is true by default). The poster boasts compile time improvements up to 59%, but the performance boost you see will vary depending on the solution and its project dependencies.  I tested with building a solution at my office, and here are my results (runs are in seconds): # of Processes 1st Run 2nd Run 3rd Run Avg Performance 1 192 195 200 195.67 100% 2 155 156 156 155.67 79.56% 4 146 149 146 147.00 75.13% 8 136 136 138 136.67 69.85%   So I updated all of our build scripts to build using 2 cores (~20% speed boost), since that gives us the biggest bang for our buck on our solution without bogging down a machine, and developers may sometimes compile more than 1 solution at a time.  I’ve put the any-PC-safe batch script code at the bottom of this post. The poster also has a follow-up post showing how to add a button and keyboard shortcut to the Visual Studio IDE to have VS build in parallel as well (so you don’t have to use a build script); if you do this make sure you use the .Net 4.0 MSBuild, not the 3.5 one that he shows in the screenshot.  While this did work for me, I found it left an MSBuild.exe process always hanging around afterwards for some reason, so watch out (batch file doesn’t have this problem though).  Also, you do get build output, but it may not be the same that you’re used to, and it doesn’t say “Build succeeded” in the status bar when completed, so I chose to not make this my default Visual Studio build option, but you may still want to. Happy building! ------------------------------------------------------------------------------------- :: Calculate how many Processes to use to do the build. SET NumberOfProcessesToUseForBuild=1  SET BuildInParallel=false if %NUMBER_OF_PROCESSORS% GTR 2 (                 SET NumberOfProcessesToUseForBuild=2                 SET BuildInParallel=true ) MSBuild /maxcpucount:%NumberOfProcessesToUseForBuild% /p:BuildInParallel=%BuildInParallel% "C:\dev\Client.sln"
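
    A note on reading the flattened results table above: each row lists the process count, three run times in seconds, their average, and a Performance figure that is presumably the average divided by the single-process baseline. For example, with 4 processes the average is (146 + 149 + 146) / 3 = 147.00 s, and 147.00 / 195.67 is about 75.13%, i.e. roughly a 25% reduction in build time on this particular solution.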

    Read the article

  • Express your personality and potential @ Oracle

    - by jessica.ebbelaar(at)oracle.com
    Ciao, my name is Michel and I am a 24 year old guy from Forlì, Italy, working as a Business Intelligence Business Development Consultant in Rome. After I completed the Bachelor's Degree in Business Administration at Bologna University, I took a Multiple Master of Science in International Management organized by three European Universities: Bologna University (IT), ICN Business School of Nancy (FR) and Uppsala University (SE).I therefore had the chance to travel a lot and, most important, to study and meet hundreds of people from all over the world. This experience enhanced the passion I foster for international environments, different cultures and countries; not to mention the learning of foreign languages. Working for such a structured multinational as Oracle totally reflects my desire to be surrounded by a multicultural and international atmosphere, having the opportunity to grow from the personal point of view and to endlessly boost my career path. Demand Generation My department is responsible for demand generation activities. That implies, for instance, the implementation of various strategies aimed to feed the pipeline for Business Intelligence products in the Italian market. Organization of marketing campaigns, events, providing ideas or contacts to the sales force is just a few examples of our work. I like to define the role of the business development as something that translates the marketing insights into tools to increase the sales, accounting the differences amongst countries, companies and industries. Furthermore, it is an important feature to collaborate with the EMEA team to share knowledge and best practices. My initial lack of an IT background has been constantly covered by the managers and my personal mentor. The thing I appreciated most is indeed the fact I always feel to be a growing potential, becoming essential day after day. I am surprised by the trust and confidence people have on me and how they proudly encourage my personal initiative and always spur me to contribute. Career Ambitions If your ambitions are to work within an international but extremely people focused environment, to contribute to the growth of one of the most successful companies in the world, to deal with a fast-paced industry and highly competitive market, to have the chance to fully express your personality and potential and to satisfy your career ambitions over the years, then Oracle is right for YOU. Looking forward to having YOU aboard! Do you want to find out more about the open roles within Oracle? Follow us on http://campus.oracle.com.

    Read the article

  • I'd like to switch from 32-bit to 64-bit within same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on the 64-bit version, and it seems that it would be a marginally better choice now for me. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or using synaptic's save/get markings. The problem here is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work, hence my appeal for expert tips. Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and would save at least most of the time to reconfigure. But are there differences in configuration files in either of these directories for 32 vs 64 bits that might cause problems? After reinstalling to 64-bit, I can then continue along the 64 bit path for upgrades, but I thought it would be easier to switch within the same version than to try to reinstall apps and upgrade at the same time. Some methods I've seen suggested, among others: A. From Ubuntu forums On your old system (assuming it is still working), start up Synaptic and go: File->Save Markings and choose a file name along with a location (like a USB drive) that you can use when you have installed your new system). You need to check on the bottom: "Save full state, not only changes" This file contains a list of all your currently installed packages, and when you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?) then start up Synaptic and go: File-Read Markings and point it at your saved file, and after that has completed then select Apply to kick off the download & installation of all of those packages you had installed previously! B. From the same discussion: According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration: # dpkg --get-selections "*" >myselections # or use \* # debconf-get-selections > debconfsel.txt and the following will reinstall and reconfigure them: # dselect update # debconf-set-selections < debconfsel.txt # dpkg --set-selections <myselections # apt-get -u dselect-upgrade # or dselect install C. A variation on the above I've seen a lot, this from stackoverflow: dpkg --get-selections > package_list then on the new install: cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade I don't really understand B, or why it's slightly different than many others.

    Read the article

  • How can a solo programmer become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I’ve been looking at parsing TSQL scripts in a variety of ways for a variety of tasks.  In my last entry ‘How to prevent ‘Select *’ : The elegant way’, I looked at parsing SQL to report upon uses of SELECT *.  The obvious question leading on from this is, “Great, what about other code smells?”  Well, using the language service parser to do that was turning out to be a bit of a hard job; sure, I was getting tokens, but no real context.  I wasn't even being told when an end of statement had been reached. One of the other parsing options available from Microsoft is exposed in the assembly ‘Microsoft.SqlServer.TransactSql.ScriptDom’; this is, I believe, installed with the client development tools with SQL Server.  It is much more feature rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis. So, what sort of smells can I now find using it?  Well, for an opening gambit, quite a nice little list: Use of NOLOCK Set of READ UNCOMMITTED Use of SELECT * Insert without column references Explicit datatype conversion on Sargs Cross server selects Non use of two-part naming convention Table and Query hint usage Changes in set options Use of single line comments Use of ordinal column positions in ORDER BY clause Now, let's not argue the point that “it depends” on whether some of these are smells, but as an academic exercise it is quite interesting.  The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip  All the usual disclaimers apply to this code; I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples. The zip file contains a PowerShell script and my test cases.  The assembly used requires .Net 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK).  The code searches for all .sql files in the folder hierarchy for the working path; you can override this if you want by simply changing the $Folder variable, and it processes each in turn for the smells.  Feedback is not great at the moment; all it does is output to an xml file (Smells.xml) the offset position and a description of the smell found. Right now, I am interested in your feedback.  What do you think?  Is this (or should it be) more than an academic exercise?  Can tooling such as this be used as some form of code quality measure?  Does it work? Do you have a case listed above which is not being reported? Do you have a case that you would love to be reported? Let me know, please: mailto: [email protected]. Thanks

    Read the article

  • changing drive nodes & hdparm

    - by Kalamalka Kid
    I am currently attempting to create a command that works at startup to kill the power on two of my very noisy hard drives. I have edited the /etc/rc.local file to include this command: sudo hdparm -y /dev/sdc sudo hdparm -y /dev/sdd exit 0 While I think this should work, it seems the allocated drives keep switching around every time I reboot. I have sda, sdb, sdc, sdd, and sde but they keep getting jumbled around (making the drive I wish to shut different than sdd), which is making the task of shutting down the right drive on start-up quite cumbersome. I had a perfectly functioning fstab file working which disappeared, but I restored it from the backup into the /etc dir: # <file system> <mount point> <type> <options> <dump> <pass> #Entry for /dev/sda1 : UUID=43c09daf-08a5-44f2-89b0-fc7c6f0d1e67 / ext4 errors=remount-ro 0 1 #Entry for /dev/sdd1 : UUID=443AFBAD7FE50945 /media/DX100 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 #Entry for /dev/sdb1 : UUID=FCE456F5E456B21E /media/GalaxyM83 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 #Entry for /dev/sdf1 : UUID=1CA057FDA057DBB8 /media/Holideck ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 #Entry for /dev/sdc1 : UUID=7ABB49654B799D40 /media/JX3P ntfs defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0 it seems every time I boot the order of the drives changes. I do not know how to resolve this. A quick workaround for the problem was to go with UUID instead of the DEV letter by editing the /etc/rc.local file to include: hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945 hdparm -y /dev/disk/by-uuid/7ABB49654B799D40 So I thought I was in the clear, as I heard both hard drives die down during the boot sequence, BUT, as soon as I log in, both drives start up again! So now I have to figure out what is making them start up again after login, or perhaps another way to get them to turn off. Is there some kind of command I can get to execute after login? I tried editing the startup applications to include an autossh with: autossh - sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40 autossh - sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945 but this did not seem to work to turn off the disks after login.

    Read the article

  • Lubuntu wireless issue with Broadcom chipset

    - by Variant Web Solutions
    I'm a web dev that just started a new venture in buying wiped laptops in bulk and selling them at a low cost to lower income families that want a laptop that will simply preform and not become virus ridden and require constant maintenance, so naturally I opted for a linux distro and after some research, lubuntu was my top pick. I'm not a stranger to linux as all of my servers for my web dev business are linux, but I am however new to L/Ubuntu and I'm having some issues with both wifi (broadcom chipsets) on the 20 or so dells that I have right now, D800's, D810's and E5400's. Not sure if you can point me in a solid direction, I've scoured (and implemented) the suggestions on ask ubuntu and still coming up short. On one of the e5400's ( though they all seem to suffer the same errors) I got the following: [code]dell-latitude-e5400@dell-Latitude-E5400:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. dell-latitude-e5400@dell-Latitude-E5400:~$ lspci 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 02) 00:1f.2 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 00:1f.5 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 02:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba) 02:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04) 02:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21) 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5761e Gigabit Ethernet PCIe (rev 10) dell-latitude-e5400@dell-Latitude-E5400:~$ rfkill Usage: rfkill [options] command Options: --version show version (0.5-1ubuntu1 (Ubuntu)) Commands: help event list [IDENTIFIER] block IDENTIFIER unblock IDENTIFIER where IDENTIFIER is the index no. 
of an rfkill switch or one of: all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm nfc dell-latitude-e5400@dell-Latitude-E5400:~$ [/code]

    Read the article

  • Code Coverage for Maven Integrated in NetBeans IDE 7.2

    - by Geertjan
    In NetBeans IDE 7.2, JaCoCo is supported natively, i.e., out of the box, as a code coverage engine for Maven projects, since Cobertura does not work with JDK 7 language constructs. (Although, note that Cobertura is supported as well in NetBeans IDE 7.2.) It isn't part of NetBeans IDE 7.2 Beta, so don't even try there; you need some development build from after that. I downloaded the latest development build today. To enable JaCoCo features in NetBeans IDE, you need do nothing different from what you'd do when enabling JaCoCo in Maven itself, which is rather wonderful. In both cases, all you need to do is add this to the "plugins" section of your POM: <plugin> <groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>0.5.7.201204190339</version> <executions> <execution> <goals> <goal>prepare-agent</goal> </goals> </execution> <execution> <id>report</id> <phase>prepare-package</phase> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin> Now you're done and ready to examine the code coverage of your tests, whether they are JUnit or TestNG. At this point, i.e., for no other reason than that you added the above snippet into your POM, you will have a new Code Coverage menu when you right-click on the project node: If you click Show Report above, the Code Coverage Report window opens. Here, once you've run your tests, you can actually see how many classes have been covered by your tests, which is pretty useful since 100% tests passing doesn't mean much when you've only tested one class, as you can see very graphically below: Then, when you click the bars in the Code Coverage Report window, the class under test is shown, with the methods for which tests exist highlighted in green and those that haven't been covered in red: (Note: Of course, striving for 100% code coverage is a bit nonsensical. For example, writing tests for your getters and setters may not be the most useful way to spend one's time. But being able to measure, and visualize, code coverage is certainly useful regardless of the percentage you're striving to achieve.) Best of all, everything you see above is available out of the box in NetBeans IDE 7.2. Take a look at what else NetBeans IDE 7.2 brings for the first time to the world of Maven: http://wiki.netbeans.org/NewAndNoteworthyNB72#Maven

    Read the article

  • Oracle HCM Cloud Customer Q&A with WAXIE Sanitary Supply

    - by HCM-Oracle
    At this year’s Oracle HCM User Group (OHUG) Global conference, we had the opportunity to sit down with Oracle HCM Cloud customers for a short Q&A. We got to hear about what brought them to the OHUG conference, some of the benefits they are receiving from their Oracle HCM Cloud solutions, and advice they would give other businesses looking to move to the cloud.  Below is a discussion we had with Melissa Halverson, Benefits & HRIS Manager at WAXIE Sanitary Supply.  Q: What made you attend the OHUG Global Conference this year? Halverson: The biggest reason is networking. It allows me to connect with others in the Oracle HCM Cloud community. I was able to speak at the HCM Cloud SIG (Special Interest Group) on the first day and share my experiences as well as hear the experiences of other Oracle HCM Cloud users. It also allows me to get face-time with key people within Oracle.  Q: What Oracle HCM solutions are you currently using? Halverson: Global HR, Benefits, Workforce Compensation, and Performance Management. Q: Do you plan to invest further in Oracle HCM? Halverson: Yes, we are interested in Time and Labor. We would also like to get Recruiting at some point in the future. Q: What would you say is the most significant benefit you’ve realized from your use of Oracle HCM solutions? Halverson: First and foremost would be process improvement. Before we had Oracle HCM Cloud we relied on a paper process where something as simple as an employee address change required changes to be made manually in 9 different systems. Obviously that was extremely inefficient, but also increased the likelihood of errors being made.  The other huge benefit we have seen was in making information visible to the people that need it. Prior to implementing Oracle HCM Cloud, it was very difficult for anyone to access and make use of the information in our systems. Now, we can provide this information to those who need it to make better decisions.  Q: What advice would you give an organization looking to move their HR systems to the cloud? Halverson: One thing I think many organizations don't spend enough time doing is thoroughly vetting their implementation partner. I believe you should be vetting your implementation partner as much as you did the system itself. Also, manpower is so important. Involve as large a team as possible because you don’t want to get stuck having too few bodies to help out. And set realistic time frames. Biting off more than you can chew will inevitably result in failure. Having a phased approach is always best rather than trying to do everything at once. Thanks for the tips Melissa. Enjoy the rest of the conference!

    Read the article

  • Lock mouse in center of screen, and still use to move camera Unity

    - by Flotolk
    I am making a program from 1st person point of view. I would like the camera to be moved using the mouse, preferably using simple code, like from XNA var center = this.Window.ClientBounds; MouseState newState = Mouse.GetState(); if (Keyboard.GetState().IsKeyUp(Keys.Escape)) { Mouse.SetPosition((int)center.X, (int)center.Y); camera.Rotation -= (newState.X - center.X) * 0.005f; camera.UpDown += (newState.Y - center.Y) * 0.005f; } Is there any code that lets me do this in Unity, since Unity does not support XNA, I need a new library to use, and a new way to collect this input. this is also a little tougher, since I want one object to go up and down based on if you move it the mouse up and down, and another object to be the one turning left and right. I am also very concerned about clamping the mouse to the center of the screen, since you will be selecting items, and it is easiest to have a simple cross-hairs in the center of the screen for this purpose. Here is the code I am using to move right now: using UnityEngine; using System.Collections; [AddComponentMenu("Camera-Control/Mouse Look")] public class MouseLook : MonoBehaviour { public enum RotationAxes { MouseXAndY = 0, MouseX = 1, MouseY = 2 } public RotationAxes axes = RotationAxes.MouseXAndY; public float sensitivityX = 15F; public float sensitivityY = 15F; public float minimumX = -360F; public float maximumX = 360F; public float minimumY = -60F; public float maximumY = 60F; float rotationY = 0F; void Update () { if (axes == RotationAxes.MouseXAndY) { float rotationX = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX; rotationY += Input.GetAxis("Mouse Y") * sensitivityY; rotationY = Mathf.Clamp (rotationY, minimumY, maximumY); transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0); } else if (axes == RotationAxes.MouseX) { transform.Rotate(0, Input.GetAxis("Mouse X") * sensitivityX, 0); } else { rotationY += Input.GetAxis("Mouse Y") * sensitivityY; rotationY = Mathf.Clamp (rotationY, minimumY, maximumY); transform.localEulerAngles = new Vector3(-rotationY, transform.localEulerAngles.y, 0); } while (Input.GetKeyDown(KeyCode.Space) == true) { Screen.lockCursor = true; } } void Start () { // Make the rigid body not change rotation if (GetComponent<Rigidbody>()) GetComponent<Rigidbody>().freezeRotation = true; } } This code does everything except lock the mouse to the center of the screen. Screen.lockCursor = true; does not work though, since then the camera no longer moves, and the cursor does not allow you to click anything else either.

    Read the article

  • What is the use of Association, Aggregation and Composition (Encapsulation) in Classes

    - by SahilMahajanMj
    I have gone through lots of theory about what encapsulation is and the three techniques of implementing it, which are Association, Aggregation and Composition. What I found is: Encapsulation Encapsulation is the technique of making the fields in a class private and providing access to the fields via public methods. If a field is declared private, it cannot be accessed by anyone outside the class, thereby hiding the fields within the class. For this reason, encapsulation is also referred to as data hiding. Encapsulation can be described as a protective barrier that prevents the code and data being randomly accessed by other code defined outside the class. Access to the data and code is tightly controlled by an interface. The main benefit of encapsulation is the ability to modify our implemented code without breaking the code of others who use our code. With this feature, encapsulation gives maintainability, flexibility and extensibility to our code. Association Association is a relationship where all objects have their own lifecycle and there is no owner. Let’s take an example of Teacher and Student. Multiple students can associate with a single teacher and a single student can associate with multiple teachers, but there is no ownership between the objects and both have their own lifecycle. Both can be created and deleted independently. Aggregation Aggregation is a specialized form of Association where all objects have their own lifecycle but there is ownership, and a child object cannot belong to another parent object. Let’s take an example of Department and Teacher. A single teacher cannot belong to multiple departments, but if we delete the department, the teacher object will not be destroyed. We can think of it as a “has-a” relationship. Composition Composition is again a specialized form of Aggregation, and we can call it a “death” relationship. It is a strong type of Aggregation. The child object does not have its own lifecycle, and if the parent object is deleted, all child objects will also be deleted. Let’s again take the example of the relationship between a House and its Rooms. A house can contain multiple rooms; there is no independent life of a room, and a room cannot belong to two different houses; if we delete the house, the rooms are automatically deleted. The question is: these are all real-world examples. I am looking for some description of how to use these techniques in actual class code. I mean, what is the point of using three different techniques for encapsulation, how can these techniques be implemented, and how do I choose which technique is applicable at a given time?
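
    A minimal sketch (not from the original question, and only one possible mapping) of how these relationships can look as plain class code, written in C++ so the ownership differences show up in the member types; the class names mirror the examples above:

        #include <cstddef>
        #include <memory>
        #include <string>
        #include <vector>

        class Teacher;                                   // forward declaration

        // Association: Student and Teacher refer to each other, but neither owns
        // the other; both have independent lifetimes.
        class Student {
        public:
            std::vector<Teacher*> teachers;              // non-owning pointers
        };

        class Teacher {
        public:
            std::vector<Student*> students;              // non-owning pointers
        };

        // Aggregation: a Department "has" teachers, but a Teacher can outlive the
        // Department (other owners may still hold the shared_ptr).
        class Department {
        public:
            std::vector<std::shared_ptr<Teacher>> teachers;
        };

        // Composition plus encapsulation: Rooms are private, owned by value, and
        // created and destroyed together with the House that contains them.
        class Room {
        public:
            std::string name;
        };

        class House {
        public:
            void addRoom(const std::string& name) { rooms.push_back(Room{name}); }
            std::size_t roomCount() const { return rooms.size(); }
        private:
            std::vector<Room> rooms;                     // dies with the House
        };

    In this sketch the member types carry the semantics: plain pointers for association, shared ownership for aggregation, and by-value members for composition, so destroying a House destroys its Rooms while destroying a Department leaves its Teachers alive; encapsulation is simply the private field plus the public methods in front of it.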

    Read the article

< Previous Page | 605 606 607 608 609 610 611 612 613 614 615 616  | Next Page >