Search Results

Search found 61830 results on 2474 pages for 'efficient time use'.


  • The Start of a Blog

    - by dbradley
    So, here's my new blog up and running. Who am I, and what am I planning to write here?

    First off, here's a little about me: I'm a recent graduate (coming up to a year since I finished) who studied Software Engineering on a four-year course where the third year was an industrial placement. During the placement I worked for a company called Adfero as a "Technical Consultant" as well as a junior "Information Systems Developer". Once I completed my placement I went back to finish my final year, but also continued in my developer role two or three days a week with the company.

    Working part time while at uni always seems like a great idea until you get halfway through the year. For me the problem was not so much a lack of time as a lack of interest in the course content, having had the chance to work on real projects in a live environment. Most people who have been graduated for a little while find this too: looking back, uni work seems much more trivial from a problem-solving point of view. The key to uni work is really your ability to prove, through how you talk about something, that you comprehensively understand the basics.

    After completing uni I returned to Adfero full time, purely in the developer role, which is where I've now been for almost a year. I've since also taken on the title of "Information Systems Architect", working on some of the higher-level design problems within the products.

    What I want to share on this blog is some of the interesting things I've learnt over the last year, the things they don't teach you at uni, and pretty much anything else I find interesting! My personal favorite areas are text indexing, search, and particularly good software engineering design - good design combined with good code makes the first step towards a well-written, maintainable piece of software. Hopefully I'll also be able to share a few of the products I've worked on, the mistakes I've made, and the software problems I've inherited from previous developers and had to heavily refactor.

    Read the article

  • Problem re-factoring multiple timer countdown

    - by jowan
    I built my multiple-timer countdown from a simple script (entire code). The problem happens when I want to add another countdown timer: I have to declare a variable for the current total seconds,

        elapsed_seconds = tampilkan("#time1");

    and a timer variable set with setInterval:

        timer = setInterval(function() {
            if (elapsed_seconds != 0) {
                elapsed_seconds = elapsed_seconds - 1;
                $('#time1').text(get_elapsed_time_string(elapsed_seconds));
            } else {
                $('#time1').parent().slideUp('slow', function() {
                    $(this).find('.post').text("Post has been deleted");
                });
                $('#time1').parent().slideDown('slow');
                clearInterval(timer);
            }
        }, 1000);

    I already know about refactoring and have tried different approaches, but I'm stuck refactoring this code. I want to make it flexible: when I add more countdown timers, the script should handle them automatically or dynamically without me having to add a bunch of code, and the code should become clearer and more efficient. Thanks in advance.

    Read the article

  • TechEd North America 2012–Day 3 #msTechEd #teched

    - by Marco Russo (SQLBI)
    Yesterday was my longest day at this TechEd: we talked with many people at Community Night until 9pm, and I have to say that just a few months after Analysis Services 2012 was released, there are many people already using it. And the adoption of PowerPivot is starting to be quite large. Many new ideas and challenges are coming from several different real-world scenarios. I was tired but really happy.

    Alberto presented his Many-to-Many Relationships in BISM Tabular session, which was in the same time slot as the BI Power Hour. For this reason very few people attended Alberto's session, so I think many will watch the recorded session (it should be available within a few days).

    So what about today? I'll spend some time at the Technical Learning Center area (full schedule here), but the most important event today will be Querying multi-billion rows with many to many relationships in SSAS Tabular (xVelocity) at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Why should you attend? Mainly because you will see a live demo over a 4-billion-row table with many-to-many relationships involved in complex queries. And for those of you who think this is not enough to attend a 15-minute fun session, well, we'll give away some 8GB USB memory keys to those of you who guess the exact response time of the queries before execution. Convinced? Join us at 11:15am and don't be late - the session finishes at 11:30am!

    After that, we'll run a book signing session at the Bookstore at 12:30pm, and I will be in the Technical Learning Center area from 3:00pm until 5:00pm. See you there!

    Read the article

  • Constituent Experience Counts In Public Sector

    - by Michael Seback
    Businesses and government organizations are operating in an era of the empowered customer, where service and communication channels are challenged every day. Consumers in the private sector have high expectations, from purchasing gifts online and reading reviews on social sites to expecting the companies they do business with to know and reward them. In the public sector, constituents likewise expect government organizations to provide consistent and timely service across agencies and touch points. Examples include requesting critical city services, applying for social assistance, or reviewing insurance plans on a health insurance exchange. If an individual does not receive the services they need at the right time and place, it can create a dire situation involving housing, food or healthcare assistance. Government organizations need to deliver a fast, reliable and personalized experience to constituents.

    Look at a few statistics from a recent government-focused survey:

    How do you define good customer service? 70% improved services, 48% shortest time to provide information, 44% shortest time to resolve complaints.

    What are ways/opportunities to improve customer service? 69% increased collaboration across agencies, 41% increased customer service channels.

    Are you using the data collected to make informed decisions to improve customer service efforts? 39% data collection is limited, not used to improve decision making.

    Source: Re-Imagining Customer Service in Government, 2012. Click here to see the highlights.

    Would you like to get started? Read Eight Steps to great constituent experiences for government.

    Read the article

  • Retrofit WebForms with ASP.NET MVC - NoVa Code Camp 2010.2 Demo

    - by Soe Tun
    Thank you to everyone who attended my Retrofit WebForms with ASP.NET MVC session at NoVa Code Camp 2010.2. It was a fun event for me, and I hope you had a great time and learned something from it. I wish I had more time to go over some of the more important topics in detail. I *promise* I will write a blog post series covering the topics I didn't get to, since I'll have some vacation time during the December holidays.

    Please note that the ".bak" file included in the zip file is a SQL Server database backup file. You have to restore it on your database server to run it with the source code demo.

    Please feel free to ask me about the demo project through Twitter or from this blog post. I'll be glad to help you out. If you want me to give this presentation at your .NET user group, please let me know and I'll be honored to speak there as well.

    Again, thank you all and have a great holiday season. Here is the download link to my demo project zip file, with the PowerPoint presentation in it. Please let me know if the link doesn't work.

    Read the article

  • First ATMs programming language

    - by revo
    The first ATMs did little more than dispense cash: they were offline machines that worked with punch cards impregnated with carbon and a 6-digit PIN code. The maximum withdrawal with a card was 10 pounds, and each card was single-use - the ATM swallowed it! The first ATM was installed in London in 1967. Looking at the timeline of programming languages, there were many programming languages created before that decade. I don't know about the hardware either, but what programming language were these machines programmed in?

    (I couldn't find a detailed biography of John Shepherd-Barron, the ATM's inventor.)

    Update

    I found this picture, taken from a newspaper from the year 1972 in Iran. Translated: it shows Mr. Rad-lon (if spelled correctly), the manager of the Barros (if spelled correctly) International Educational Institute in the United Kingdom, at the right, and Mr. Jim Sutherland, an expert in computer kiosks. In the rest of the text on this paper, these kinds of ATMs, called "Automated Computer Kiosks", were advertised with this: Mr. Rad-lon puts his card into one specific location of the Automated Computer Kiosk, and after 10 seconds he withdraws his cash.

    Two more questions:

    1. How were those ATMs so fast? (withdrawal in 10 seconds, in that year)

    2. I couldn't find any text on the Internet about "Automated Computer Kiosks" - is that term valid, or were they simply being called computers at that time?

    Read the article

  • Converting large files in python

    - by Cenoc
    I have a few files, each ~64GB in size, that I would like to convert to HDF5 format. I was wondering what the best approach would be? Reading line by line seems to take more than 4 hours, so I was thinking of using multiprocessing, but was hoping for some direction on what would be the most efficient way without resorting to Hadoop. Any help would be very much appreciated. (And thank you in advance.)

    EDIT: Right now I'm just doing a "for line in fd:" approach. After that I just check to make sure I'm picking out the right sort of data, which is very quick; I'm not writing anywhere, and it's still taking around 4 hours to complete. I can't read fixed-size blocks of data because the blocks in this weird file format I'm reading are not standard - it switches between three different sizes, and you can only tell which by reading the first few characters of the block.
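
    Since the block type (and hence size) can only be determined from each block's first characters, one workable shape is a single sequential reader that dispatches on those bytes and appends parsed rows to a resizable HDF5 dataset in large chunks. Below is a minimal sketch using h5py; the one-byte type tag, the BLOCK_SIZES table, and parse_block are illustrative assumptions, since the actual format isn't specified:

        import h5py
        import numpy as np

        # Hypothetical mapping from a block's leading byte to its size;
        # the real format switches between three sizes.
        BLOCK_SIZES = {b"A": 512, b"B": 1024, b"C": 4096}

        def convert(src_path, dst_path, chunk_rows=100_000):
            with open(src_path, "rb") as src, h5py.File(dst_path, "w") as dst:
                # Resizable 1-D dataset; grown as rows arrive.
                ds = dst.create_dataset("data", shape=(0,), maxshape=(None,),
                                        dtype="f8", chunks=True)
                buf = []
                while True:
                    tag = src.read(1)          # leading byte identifies the block type
                    if not tag:
                        break                  # end of file
                    body = src.read(BLOCK_SIZES[tag] - 1)
                    buf.extend(parse_block(tag, body))  # format-specific parser (assumed)
                    if len(buf) >= chunk_rows:
                        flush(ds, buf)
                        buf = []
                if buf:
                    flush(ds, buf)

        def flush(ds, rows):
            n = ds.shape[0]
            ds.resize((n + len(rows),))
            ds[n:] = np.asarray(rows, dtype="f8")

    Buffered appends like this keep the per-row HDF5 overhead low; if parsing rather than I/O turns out to be the bottleneck, the parse step is the part worth handing to multiprocessing.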

    Read the article

  • XNA on the TechNet Wiki

    - by Michael B. McLaughlin
    Many months ago I came across an interesting Microsoft website, the TechNet Wiki, while looking for information about something I can't even remember anymore. I noticed at the time that its section on gaming technologies was sparse, and I even exchanged a few emails with one of the friendly Microsoft employees who contributes there regularly about some ideas I had for the site. I seem to recall mentioning my intention to add some articles on XNA when I found the time, but between one thing and another I was busy from the end of last summer straight through 'til now.

    Yesterday I came across the TechNet Wiki link in my miscellaneous links collection and remembered my intentions of many months ago. I decided that adding XNA pages to it would make a nice project to work on while taking breaks from my other projects. So I wrote my first two articles for it: XNA Framework Overview and Content Pipeline Overview. I hope to add more in the coming days and weeks.

    I'd be delighted if some of my fellow XNA enthusiasts out there joined in, time permitting. For anyone else who'd like to add a page or two on a topic area you're familiar with, this seems like a great opportunity to contribute to the community and help build a nice knowledge base to benefit all of us who are always interested in learning something new!

    Read the article

  • Instapaper Updates; Sports Native Social Media Sharing, Browsing, and More

    - by Jason Fitzpatrick
    Popular web content manager Instapaper has updated to version 3.0, bringing a host of new features like native support for social media sharing, a recommendation system, in-app web browsing, and more. Last year we shared a detailed guide on how to use Instapaper to save content from the web to your iOS device for later reading - definitely check it out if you're unfamiliar with Instapaper.

    Some of the new features in Instapaper 3.0 include a social recommendation system where you can follow other Instapaper users and see the articles they are liking and sharing, native support for sharing to Twitter, Facebook, and other social media systems, smart rotation lock on the display, and more efficient article downloading and storage. Check out the link below for a full rundown of the new features on the Instapaper blog.

    Instapaper 3.0 Is Here! [Instapaper via O'Reilly Radar]

    Read the article

  • Matlab cell length

    - by AP
    OK, I seem to have most of the problem solved; I just need an expert eye to spot my error, as I am stuck. I have a file of size [125 x 27] and I want to convert it to a file of size [144 x 27], replacing the missing timestamps with rows of zeros. (Ideally it's a 10-minute daily average and thus should have a file length of 144.) Here is the code I am using:

        fid = fopen('test.csv', 'rt');
        data = textscan(fid, ['%s' repmat('%f',1,27)], 'HeaderLines', 1, 'Delimiter', ',');
        fclose(fid);

        % Make time a datenum of the first column
        time = datenum(data{1}, 'mm/dd/yyyy HH:MM')

        % Find the difference in minutes between each row
        timeDiff = round(diff(datenum(time)*(24*60)))

        % The rest of the data
        data = cell2mat(data(2:28));

        newdata = zeros(144,27);
        for n = 1:length(timeDiff)
            if timeDiff(n) == 10
                newdata(n,:) = data(n,:);
                newdata(n+1,:) = data(n+1,:);
            else
                p = timeDiff(n)/10
                n = n + p;
            end
        end

    Can somebody please help me find the error inside my for loop? My output file seems to miss a few timestamped values.

    Also, can somebody help me figure out how to use uigetfile to read the above file? I am replacing

        fid = fopen('test.csv', 'rt');
        data = textscan(fid, ['%s' repmat('%f',1,27)], 'HeaderLines', 1, 'Delimiter', ',');
        fclose(fid);

    with

        [c,pathc] = uigetfile({'*.txt'},'Select the file','C:\data');
        file = [pathc c];
        file = textscan(c, ['%s' repmat('%f',1,27)], 'HeaderLines', 1, 'Delimiter', ',');

    and it's not working.

    NEW ADDITION to the old question:

        p = 1; % index into destination
        for n = 1:length(timeDiff)
            % if timeDiff(n) == 10
            %     newfile(p,:) = file(n,:);
            %     newfile(p+1,:) = file(n+1,:);
            %     p = p + 1;
            % else
            %     p = p + (timeDiff(n)/10);
            % end
            q = cumsum(timeDiff(n)/10);
            if q == 1
                newfile(p,:) = file(n,:);
                p = p + 1;
            else
                p = p + (timeDiff(n)/10);
            end
        end
        xlswrite('testnewws11.xls', newfile);

    Even with the cumsum command, this code fails when my file has one or two timestamps in the middle of long missing runs. Example input:

        8/16/2009 0:00   5.34
        8/16/2009 0:10   3.23
        8/16/2009 0:20   2.23
        8/16/2009 0:30   1.23
        8/16/2009 0:50   70
        8/16/2009 2:00   5.23
        8/16/2009 2:20   544
        8/16/2009 2:30   42.23
        8/16/2009 3:00   71.23
        8/16/2009 3:10   3.23

    My output looks like:

        5.34 3.23 2.23 0 0 0 0 0 0 0 0 0 5.23 544. 42.23 0 0 0 3.23

    Any ideas?

    Read the article

  • Where have I been for the last month?

    - by MarkPearl
    So, I have been pretty quiet for the last month or so. True, it has been holiday time and I went to Cape Town for a stunning week of sunshine and blue skies, but the second I got back home I spent the remainder of my holiday on my PC viewing tutorials on www.tekpub.com.

    Craig Shoemaker, whom I got in contact with because of his podcast, sent me a one-month free subscription to the site, and it has been really appreciated. I have done a lot of WPF programming in the past, but not any ASP.NET, so I used the time to get a peek at ASP.NET MVC 2 as well as a bunch of other technologies. I just wish I had more spare time to do the rest of the videos. While I didn't understand all of what was shown in the ASP.NET material (it required previous ASP.NET expertise), the site was a really good jump start for someone wanting to learn a new technology and broaden their horizons, and I would highly recommend it.

    My only gripe is that in South Africa we have limited bandwidth and bandwidth speeds, so I spent a lot of my monthly bandwidth on the site and had to top up with my ISP several times because of the high-quality video captures the site does. I would have preferred to download the videos, but apparently that is only available to people with the yearly subscription. Other than that, great site and thanks a ton Craig!

    Read the article

  • Using AWS or Azure, what to do about emails?

    - by Paul
    I'm coming from a background of paying a hosting company X amount per month for a server. This server comes with IIS, WebsitePanel and SmarterMail all bundled together. When I create a new domain using WebsitePanel, it automatically creates my email account; all I then need to do is configure my DNS to point to the server.

    I've decided that it is more cost efficient to move to AWS / Azure. Has anyone come from a similar background and moved to a cloud system? I'd be interested to know what you did about email. So far, these are the suggestions I've seen:

    1. Use Google Apps for each domain
    2. Use something like Elastic Email to send out emails
    3. Launch a new instance and host an email server on that

    The first option seems like quite a lot of manual configuration, and the second one works well for outgoing emails, but what about receiving? Option 3 would make the move less cost effective. What is your experience?

    Read the article

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions of objects in a 3D world. The player controls his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything but one thing, which seems to be missing from every article I read.

    Let's say we have an interpolation delay of 100ms, server tick rate = 50ms, latency = 200ms. How do I know when 100ms have passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, therefore assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quickly, for instance at t=150? I would then be starting the client clock at t=150, and at t=250 it will think 100ms have passed since it connected to the server when in fact only 50ms have passed.

    Hopefully the above paragraph is understandable. The summarized question is: how do I know at what tick to start simulating the client?

    EDIT: This is how I ended up doing it. The client keeps a clock (approximately) in sync with the server. The client then simulates the world at

        simulationTime = syncedTime - avg(RTT)/2 - interpolationTime

    The round-trip time can fluctuate, so I average it out over time. By keeping only the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, and it's looking good so far.
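
    To make the arithmetic in that EDIT concrete, here is a minimal sketch of a client clock of that shape - the class, its names, and the bounded moving-average window are illustrative assumptions, not the asker's actual code:

        import time
        from collections import deque

        class ClientClock:
            """Keeps an approximate server-synced clock and a recent-RTT average."""

            def __init__(self, interpolation_delay=0.100, window=16):
                self.offset = 0.0                 # estimated server_time - local_time
                self.rtts = deque(maxlen=window)  # only the most recent RTT samples
                self.interp = interpolation_delay

            def on_sync_reply(self, server_time, ping_sent_at):
                # Called when the server answers a clock-sync ping with its time.
                now = time.monotonic()
                rtt = now - ping_sent_at
                self.rtts.append(rtt)
                # NTP-style estimate: assume the reply spent half the round trip
                # in flight, so the server's clock read rtt/2 ago.
                self.offset = (server_time + rtt / 2) - now

            def simulation_time(self):
                # simulationTime = syncedTime - avg(RTT)/2 - interpolationTime:
                # stay behind the server by one-way latency plus the buffer.
                synced = time.monotonic() + self.offset
                avg_rtt = sum(self.rtts) / len(self.rtts) if self.rtts else 0.0
                return synced - avg_rtt / 2 - self.interp

    A bounded deque is one simple way to get the "only keep the most recent values" behavior: old samples fall out of the window, so a lasting change in latency shifts the average within a few pings.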

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code:

        var x = DoSomethingWith(y);

    How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it only dependent on y? Will it change the global or local state? How does closure affect the outcome of the call?

    In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API and hope that the documentation and/or naming conventions reveal what the subroutine will actually do.

    My question is this: do there exist any languages today that make symbolic distinctions between these types of scenarios and place compile-time constraints on what code you can actually write?

    (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)

    Read the article

  • Sprite/Tile Sheets Vs Single Textures

    - by Reanimation
    I'm making a race circuit which is constructed from various textures. For background: I'm writing it in C++, creating quads with OpenGL and assigning a loaded .raw texture to each. Currently I use 23 textures of 500px x 500px, which are all loaded and freed individually. I have now combined them all into a single sprite/tile sheet of 3000 x 2000 pixels, since the number of textures/tiles I'm using keeps increasing.

    Now I'm wondering: is it more efficient to load them individually, or to write extra code to extract a certain tile from the sheet? Is it better to load the sheet once, extract the 23 tiles and store them, or to load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it... Thanks in advance.
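
    One common alternative to cropping at all: keep the whole atlas bound as a single texture and give each quad texture coordinates that select its tile, so no per-tile textures or pixel copies are needed. A minimal sketch of that index-to-UV arithmetic (in Python for brevity; the tile and sheet sizes are the ones from the question):

        ATLAS_W, ATLAS_H = 3000, 2000   # sheet size in pixels
        TILE = 500                      # each tile is 500 x 500
        COLS = ATLAS_W // TILE          # 6 tiles per row

        def tile_uvs(index):
            """Return (u0, v0, u1, v1) texture coordinates for tile `index`."""
            col, row = index % COLS, index // COLS
            u0 = col * TILE / ATLAS_W
            v0 = row * TILE / ATLAS_H
            return (u0, v0, u0 + TILE / ATLAS_W, v0 + TILE / ATLAS_H)

        # Tile 7 sits at column 1, row 1 of the sheet:
        print(tile_uvs(7))  # (0.1666..., 0.25, 0.3333..., 0.5)

    Beyond saving load time, mapping quads into one bound atlas avoids rebinding textures between draws, which is usually the bigger win.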

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run: HR, payroll, legal, accounts payable. While they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It's everywhere except where it truly needs to be - readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly.

    The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: "How do they find anything?" and then the even more alarming, "So much for information security!" It sure looked to me like all those documents could be accessed by anyone with a key to the building. The truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about every company around the world, involving a wide variety of important business processes? Probably so.

    Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes, and many more like them, rely on access to forms and documents, whether paper or digital. Now consider that employees are estimated to spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for, or re-create, what already exists.

    Back in the doctor's office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was introduced to my new doctor, who then interviewed me and asked me the exact same questions I had answered on the form. I understand that there is value in the interview process, and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file had been brought directly together with the new patient information I provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.

    We won't solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new-patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could bring significant improvement.

    As you evaluate how content and process flow through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented department by department to solve specific problems, without a holistic approach to overall information management being taken at the same time. The end result, over the years, is disparate applications with separate information repositories, and in many cases these contain duplicate information - or worse, slightly different versions of the same information.

    This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings.

    Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective and more cost-efficient, and to manage information as effectively as possible. We have a series of three webcasts over the next few weeks focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three, and that you will find them informative. Click here to learn more about these sessions and to register for them.

    There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others.

    What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned, and what specific areas you are interested in.

    Read the article

  • Google I/O 2012 - Breaking the JavaScript Speed Limit with V8

    Presented by Daniel Clifford. Are you interested in making JavaScript run blazingly fast in Chrome? This talk takes a look under the hood of V8 to help you identify how to optimize your JavaScript code. We'll show you how to leverage V8's sampling profiler to eliminate performance bottlenecks and optimize JavaScript programs, and we'll expose how V8 uses hidden classes and runtime type feedback to generate efficient JIT code. Attendees will leave the session with solid optimization guidelines for their JavaScript app and a good understanding of how to best use performance tools and JavaScript idioms to maximize the performance of their application with V8.

    For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Session length: 47:35.

    Read the article

  • Selenium-Nunit Program Structure

    - by Jacobm001
    My office has a suite of web reporting engines written in VB. All in all there are about 300 reports, with varying displays depending on the data being fed into them. I'm trying to establish an efficient way to deal with that much diversity, but am struggling to design a system that won't be a nightmare to code and maintain. What I've considered doing is:

    1. On program launch, read the steps required for each test page. This may include multiple tests for the same page with varying inputs.
    2. Write each iteration of the test to an XML file under $env:temp/testname.
    3. Use the TestCaseSource attribute of NUnit to funnel every related XML file in as a source.

    My major stumbling block has been how to get that data into the NUnit framework. Is NUnit really appropriate for what I'm trying to do, or is it too static?

    Read the article

  • Is it possible to auto-mount sshfs

    - by Mark D
    Is it possible to auto-mount a remote FS using sshfs upon establishing a VPN connection? Allow me to explain the scenario: I work remotely, and it helps if I can mount my home dir from a server in the office. To do that I need to VPN in, so within Network Manager I select the relevant VPN and connect. It connects, but now I have to drop to the command line and mount my home dir on several machines. If I forget to do one machine, my local dev environment isn't as efficient. I suppose I could write a quick bash script to do this, but I'd rather have it run automatically when I connect.

    Read the article

  • JavaOne Countdown, Are you ready?

    - by Angela Caicedo
    This is a great time of the year! Not only does the weather start cooling down a bit, but it's time to get ready for JavaOne 2012. It feels so long since my last JavaOne (last year I missed it because I was on mom duty), so this year I couldn't be happier to be this close to the action again. Have you ever been to JavaOne? There are a million great reasons to love JavaOne, and the most important for me is the atmosphere of the conference: the Java community is there, and Java is in the air!

    This year we have more than 450 sessions, and there are HOLs (hands-on labs) to get your hands dirty with code. In addition, there will be very cool demos, an exhibition hall, and a DEMOground. During the whole time you will have the opportunity to interact with the speakers, discuss topics and concerns, and even have a drink!

    Oh yes, I almost forgot: there will be lots of fun apart from the technology too! For example, there will be a Geek Bike Ride, a Thirsty Bear party, and the Appreciation Party with Pearl Jam and Kings of Leon. How can this get any better!

    So, are you ready yet? Have you registered? If not, just follow this "Register for JavaOne" link and we'll see you there!

    P.S. Little known fact: if you are a student you can get your pass for free!!!

    Read the article

  • If your algorithm is correct, does it matter how long it took you to write it?

    - by John Isaacks
    I recently found out that Facebook has a programming challenge that, if completed correctly, automatically gets you a phone interview. There is a sample challenge that asks you to write an algorithm to solve a Tower of Hanoi type problem: given a number of pegs and discs, and an initial and final configuration, your algorithm must determine the fewest steps possible to get to the final configuration, and output those steps.

    This sample challenge gives you a 45-minute time limit, but still lets you test whether your code passes once the limit expires. I did not know of any cute math solution that could solve it, and I didn't want to look for one since I think that would be cheating, so I tried to solve the challenge as best I could on my own. I was able to write an algorithm that worked and passed. However, it took me over 4 hours to write, much longer than the 45-minute requirement. Since it took me so much longer than the allotted time, I have not attempted the actual challenge.

    This got me wondering though: in reality, does it really matter that it took me that long? Is this a sign that I will not be able to get a job at a place like this (not just Facebook, but Google, Fog Creek, etc.) and need to lower my aspirations, or should the fact that I actually passed on my first attempt, even though it took too long, be taken as a good sign?
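
    As an aside for anyone attacking the same sample problem: the configurations are small enough that a plain breadth-first search over states finds the provably fewest moves with no clever math. A rough sketch of that idea (the state encoding and names are my own, not the challenge's expected solution):

        from collections import deque

        def fewest_moves(pegs, start, goal):
            """start/goal: tuples where entry i is the peg of disc i (0 = smallest).
            Returns the shortest list of (disc, from_peg, to_peg) moves."""
            start, goal = tuple(start), tuple(goal)
            frontier = deque([(start, [])])
            seen = {start}
            while frontier:
                state, path = frontier.popleft()
                if state == goal:
                    return path
                # The top disc on each peg is the smallest-numbered disc on it.
                tops = {}
                for disc, peg in enumerate(state):
                    tops.setdefault(peg, disc)
                for peg, disc in tops.items():
                    for dest in range(pegs):
                        # Legal move: destination is empty or its top disc is larger.
                        if dest != peg and tops.get(dest, len(state)) > disc:
                            nxt = state[:disc] + (dest,) + state[disc + 1:]
                            if nxt not in seen:
                                seen.add(nxt)
                                frontier.append((nxt, path + [(disc, peg, dest)]))
            return None  # goal unreachable

        # Classic 3-peg, 3-disc Hanoi takes 7 moves:
        print(len(fewest_moves(3, (0, 0, 0), (2, 2, 2))))  # 7

    BFS explores at most pegs^discs states, which stays tractable for the sizes such sample challenges use.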

    Read the article

  • What should I recommend a small company looking for C# developers

    - by Coder
    Here is the issue. I am a senior developer, and one of the start-ups whose system I designed (management system / database / web) a long time ago has grown and needs software updates. I handed their system over to another developer long ago, but apparently he has left the job, so they are asking me if I can suggest where to find a new one.

    The problem is that the company has no clue that IT is not cheap. They expect multiple features to be added for $40, so that's an issue - actually one of the reasons I left the project when I did. Lots of expectations, little pay; also, I know those people outside work, so I decided to avoid straining the non-work relationships and left the project gracefully.

    Today they asked me for advice, and I told them that the feature list they want is probably going to cost something if a senior developer does the job. So I guess their best bet is to find someone who loves coding and has just finished school. That would give someone a chance to code for money, which is good for a student, and at the same time to get some hands-on experience.

    Then again, the system is not exactly a 20-line console program; there is an MSSQL database, an ASP.NET web page, and a content management system with all the AJAX stuff and some other things. A student straight out of school could have some problems with that. But thinking about it some more, a junior developer is a tricky deal: without mentoring, he can either screw up royally or just do what's asked. Also, it seems no one is coming to interviews at all, which is weird - or maybe not.

    What should I suggest to them?

    Read the article

  • In regards to applet games and UDP

    - by Tom Steinberg
    I've got about a year of Java experience, and would like to set up a server and client for an applet game. However, there don't appear to be any tutorials out there on anything like what I want to do. I would like the server to be able to store an array of x and y coordinates with a player name somehow associated with them, and send them to multiple clients in a short time span. I would like the client implemented in the applet, and to be able to request any player's position data. I'd like to use UDP, because it seems to be the best option for efficient (if less reliable) transmission of data. If anyone could give me some pointers on how to do such a project, or point me to an appropriate tutorial, I'd certainly appreciate it.
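
    To make the data flow concrete, the whole exchange can be just two datagram shapes: clients push (name, x, y) updates, and anyone can ask for a name and get the stored position back. A minimal sketch of such a server (in Python for brevity - Java's DatagramSocket offers the same send/receive primitives; the fixed wire format here is an invented assumption):

        import socket
        import struct

        FMT = "!16sff"  # invented packet layout: 16-byte name, then x and y floats

        def serve(host="0.0.0.0", port=9999):
            positions = {}  # player name -> (x, y)
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind((host, port))
            while True:
                data, addr = sock.recvfrom(64)
                if len(data) == struct.calcsize(FMT):
                    # Full-size packet: a position update from a client.
                    name, x, y = struct.unpack(FMT, data)
                    positions[name.rstrip(b"\0")] = (x, y)
                else:
                    # Anything shorter is a request: "send this player's position".
                    name = data.rstrip(b"\0")
                    x, y = positions.get(name, (0.0, 0.0))
                    sock.sendto(struct.pack(FMT, name.ljust(16, b"\0"), x, y), addr)

    Because UDP datagrams can be lost or reordered, position packets like these are usually treated as disposable: the latest one wins, and nothing is retransmitted.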

    Read the article

  • Good resources for learning about graphics hardware

    - by Ken
    I'm looking for some good learning resources on graphics hardware (and the associated low-level software). Basically, I want to learn more about what goes on underneath the OpenGL/DirectX API layers in terms of how things are implemented. I'm familiar with what happens in principle during the various stages of the rendering pipeline (viewing, projection, clipping, rasterization, etc.).

    My goal is to be able to make better and more informed decisions about trade-offs and potential optimisations when doing graphics/shader programming, with respect to issues like: batching, view culling, occlusion, draw order, avoiding state changes, triangles vs point sprites, texture sampling, and so on. Basically, whatever a graphics programmer needs to know about modern graphics hardware in order to become more effective. I'm not really looking for specific optimisation techniques; rather, I need more general knowledge so that I will naturally write more efficient code.

    Read the article

  • Kernel panic when booting from USB

    - by maaartinus
    I downloaded ubuntu-11.04-desktop-amd64.iso and used Universal-USB-Installer-1.8.6.3.exe to format my USB stick and put the ISO on it. When I tried to install from it, I got a kernel panic just like here, except for the version number (mine was 2.6.38-8-generic #42-ubuntu).

    My ISO image seems to be fine, as I installed it in VMware Player without problems. Booting Linux from USB surely works too, as I did it some time ago with an older Ubuntu version.

    I can imagine things to try, e.g., write the image again, try another version, pray, look for patches, etc. However, I'm looking for a time-efficient solution - something that will most probably work. Advice of the sort "wait two weeks until it's surely fixed, then download again" is acceptable; I'm determined to switch to Linux, but it can wait a bit.

    Read the article
