Search Results

Search found 9067 results on 363 pages for 'big fizzy'.


  • Talking JavaOne with Rock Star Charles Nutter

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers from the JavaOne Conference. They are chosen by their peers, who recognize them through conference surveys for their outstanding sessions and speaking ability. Over the years, many of the world's leading Java developers have been recognized this way. We spoke with distinguished Rock Star Charles Nutter.

    A JRuby Update from Charles Nutter

    Charles Nutter of Red Hat is well known as a lead developer of JRuby, an implementation of Ruby for the Java Virtual Machine. JRuby is tightly integrated with Java, allowing the interpreter to be embedded into any Java application with full two-way access between the Java and the Ruby code. Nutter is giving the following sessions at this year's JavaOne:

    CON7257 – "JVM Bytecode for Dummies (and the Rest of Us Too)"
    CON7284 – "Implementing Ruby: The Long, Hard Road"
    CON7263 – "JVM JIT for Dummies"
    BOF6682 – "I've Got 99 Languages, but Java Ain't One"
    CON6575 – "Polyglot for Dummies" (both with Thomas Enebo)

    I asked Nutter to give us the latest on JRuby. "JRuby seems to have hit a tipping point this past year," he explained, "moving from 'just another Ruby implementation' to 'the best Ruby implementation for X,' where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications. We're currently wrapping up JRuby 1.7, which improves support for Ruby 1.9 APIs, solves a number of user issues and concurrency challenges, and utilizes invokedynamic to outperform all other Ruby implementations by a wide margin. JRuby just gets better and better."

    When asked what he thought about the rapid growth of alternative languages for the JVM, he replied, "I'm very intrigued by efforts to bring a high-performance JavaScript runtime to the JVM. There's really no reason the JVM couldn't be the fastest platform for running JavaScript with the right implementation, and I'm excited to see that happen."

    And what is Nutter working on currently? "Aside from JRuby 1.7 wrap-up," he explained, "I'm helping the Hotspot developers investigate invokedynamic performance issues and test-driving their new invokedynamic code in Java 8. I'm also starting to explore ways to improve the general state of dynamic languages on the JVM, using JRuby as a guide, and to help the JVM become a better platform for all kinds of languages."
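
    A minimal sketch of the embedding described above, using JRuby's Embed API (org.jruby.embed.ScriptingContainer). This is an illustration rather than Nutter's own code; it assumes the JRuby jar is on the classpath, and the class and variable names are invented for the example.

        import org.jruby.embed.ScriptingContainer;

        public class JRubyEmbedSketch {
            public static void main(String[] args) {
                ScriptingContainer container = new ScriptingContainer();

                // Java -> Ruby: expose a Java value to the Ruby runtime.
                container.put("greeting", "Hello from Java");

                // Ruby -> Java: the value of the last Ruby expression is
                // returned to the Java caller.
                Object result = container.runScriptlet(
                    "puts greeting\n" +     // Ruby reads the Java-supplied variable
                    "greeting.upcase");     // returned to Java

                System.out.println(result); // prints HELLO FROM JAVA
            }
        }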

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or seeing the top 10 files in your codebase that cause the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS.

    Although all of this traceability is available within TFS, you do need to understand that it is not free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements, and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added, relating Work Items to Builds, and with knowledge of which Builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the things the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which Releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given Release, but also to know which Requirements were ever assigned to a particular Release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "ASOF" clause at the end to query historical data in the operational store for TFS:

    select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.

    Figure: Saving queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group, or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be Children of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release Build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method is to explicitly link the Build output to the Build.

    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box, but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back, through the Task that initiated the action, to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second scenario is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its test results, for a Unit Test designed to prove the existence of, and then the absence of, a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated, and that it would make sense to track in this way.

    Bugs – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs/Defects in your system effectively and report on them.

    Change Requests – I am not a feature

    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements, and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you, as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

  • C++ and function pointers assessment: lack of inspiration

    - by OlivierDofus
    I've got an assessment to give to my students. It's about C++ and function pointers. Their skill level is intermediate: it is the first year of a programming school entered after a bachelor's degree. To give you something precise, here's a sample solution for one of the 3 exercises they had to do in 30 minutes (the question was: "here's a version of the code written without function pointers; write down the same thing but with function pointers"):

    #include <cctype>      // toupper
    #include <sstream>     // istringstream
    #include <string>
    using namespace std;

    // Command handlers, defined elsewhere in the exercise.
    void Delete (istream &); void Insert (istream &);
    void Swap (istream &);   void Move (istream &);

    typedef void (*fcPtr) (istream &);
    fcPtr ArrayFct [] = { Delete, Insert, Swap, Move };

    void HandleCmd (const string && Cmd)
    {
        string AvailableCommands ("DISM");
        string::size_type Pos;
        istringstream Flux (Cmd);
        char CodeOp;

        Flux >> CodeOp;
        Pos = AvailableCommands.find (toupper (CodeOp));
        if (Pos != string::npos)
        {
            ArrayFct [Pos] (Flux);
        }
    }

    Any idea where I could find some inspiration? Some of the students have understood the principles, even though it's very hard for them to write C++ code. I know them, I know they're clever, and I'm pretty sure they would make very good project managers. So writing C++ code is not that important after all; understanding is the most important part (IMHO). I'm wondering about maybe breaking the habit and asking half of the questions about the principles, or even better, giving a sample in another language and asking them why it's better to use function pointers rather than the classical approach (usually a big switch case). Any idea where I could look to find some inspiration?

  • The spork/platypus average: shameless self promotion

    - by Roger Hart
    This is the video of the presentation I gave at UA Europe and TCUK this year. The actual sub-title was "Content strategy at Red Gate Software", but this heading feels more honest. For anybody who missed it, or is just vaguely interested, here's a link to me talking about de-suckifying the web. You can find the slideshare deck here, too.*

    Watching it back is more than a little embarrassing, and makes me really, really want to do a follow-up, so I can do three things:
    explain the rest of the big web project, now we've done it
    give some data on the outcome of the content review
    make a grovelling apology to our marketing guys, who I've been unfairly mean to in a childish effort to look cool

    There are a whole bunch of other TCUK presentations online, too. You can find them all here: http://tiny.cc/tcuk10_videos

    I'd particularly recommend Chris Atherton's "Everything you always wanted to know about psychology and technical communication" - it's full of cool stuff. You should probably also watch David Black's opening keynote, which managed to make my hour of precocious grandstanding look measured, meek, and helpful. He actually makes some interesting points, but you'd basically have to ship Richard Dawkins off to Utah if you wanted to go further out of your way to aggravate your audience. It does give an engaging account of running a large tech comms project, and raises some questions about how we propose to understand a world where increasing amounts of our stuff gets done by increasingly many, increasingly complicated tissues of APIs. Well, sort of. That's what all the notes I made were about, anyway.

    *Slideshare ate my fonts. Just so we're clear on this: I'd never use badly-kerned Arial in a presentation. Don't worry.

  • Meet The MySQL Experts Podcast: MySQL Utilities

    - by Wei-Chen Chiu
    Managing a MySQL database server can become a full-time job. On many occasions one MySQL DBA needs to manage multiple, even tens of, MySQL servers, and tools that bundle a set of related tasks into a common utility can be a big time saver, allowing you to spend more time improving performance and less time executing repetitive tasks. While there are several such utility libraries to choose from, it is often the case that you need to customize them to your needs. The MySQL Utilities library is the answer to that need: it is open source, so you can modify and expand it as you see fit.

    In the latest episode of the "Meet the MySQL Experts" podcast series, Chuck Bell, Sr. MySQL Software Developer at Oracle, introduces a variety of recently released MySQL Utilities and explains how DBAs can save significant time using them. Listen to the podcast and learn the highlights in 10 minutes. If you want further details, attend the on-demand webinar for a more complete introduction, including:
    Use cases for each utility
    How to group utilities for even more usability
    How to modify utilities for your needs
    How to develop and contribute new utilities
    Enjoy!

  • Oracle Partner Days and Oracle Days are coming to a city in EMEA near you!

    - by Javier Puerta
    Oracle Partner Days

    A new round of Oracle Partner Days is coming to a large number of European cities. These events are exclusive to Oracle partners and will deliver real business return on your OPN membership. You will hear about the business opportunities coming from the adoption of the entire Oracle stack and the latest product value propositions and related sales strategy, and you will be able to connect directly with Oracle executives and find new business opportunities with other partners in your region. The EMEA Oracle Partner Days are local/regional live events targeting key contacts in sales and consultancy, delivering Oracle strategy, engaging with the several perspectives of the Oracle portfolio, and offering executive keynotes and deep-dive, business-content-related breakout sessions. The first city will be Frankfurt, on Oct. 29. Check the full list to find an Oracle Partner Day in a city near you.

    Oracle Days

    Oracle Days will be hosted after Oracle OpenWorld across EMEA, throughout October and November. By attending an Oracle Day, customers and partners can:
    Learn how to leverage the power of the Oracle stack, by hearing customer case studies about successful business transformation, and by following cross-stack solution tracks within the agenda
    Discuss key issues for business and IT executives in cloud, big data, social, and mobile solutions, and network with peers who are facing the same challenges
    Meet Oracle experts and watch live demos of new products
    Get the latest news from Oracle OpenWorld

    See the full calendar and cities here.

  • November eSTEP newsletter for hardware partner presales

    - by user12842157
    Dear partner,

    We would like to let you know that the November edition of our eSTEP newsletter is now available on the eSTEP portal. You will find the newsletter on our portal under eSTEP News ---> Latest Newsletter. To get access to the portal, use the link at the bottom of this blog.

    News from Oracle: Reflections on Oracle OpenWorld, Oracle Buys GoAhead

    The technical corner: T4 processor, SPARC SuperCluster T4-4, Pillar Axiom 600, Oracle ZFS Appliance, Hybrid Columnar Compression Support for ZFS Storage Appliances and Pillar Axiom Storage Systems, Oracle Exalytics In-memory Machine, Oracle Big Data Appliance, Oracle Database Express Edition 11g Release 2 (Oracle Database XE), Oracle Public Cloud

    Training and events: eSTEP Events Schedule, Recently Delivered TechCasts, Delivered Campaigns in 2011, Q&A covering Oracle Database Appliance

    How to ...: Oracle Server Finder - choose the system that is right for you, Power calculator for all the HW, SW documentation search, TO YOUR ATTENTION - Remarks on new configuration options for the 7120

    URL: http://launch.oracle.com/
    PIN: has been sent to our contact list; otherwise contact: [email protected]

    Earlier editions of this newsletter can be found on the portal under "Archived Newsletters"; more information is also available under Events, Download and Links.

    We appreciate any feedback on the content of the portal and the other information we provide.

    Kind regards,
    Partner HW Enablement EMEA

  • Master's or second Bachelor's degree... or neither

    - by drD
    I have a degree in Business Administration, because at the time I didn't know what I wanted to do. I have been interested in programming for the past 2 years and have taken some action to teach myself. My experience/knowledge base is limited to the following:
    - Read Kochan's Programming in C
    - Read iOS and Objective-C books from the Big Nerd Ranch series
    - Obtained a C++ certificate at NYU - thought it would be a good way to start to get a grasp on OO & design

    I would like to continue developing my skills, but most of all, to re-orient how I am perceived as a professional. I am fully aware of how much of a novice I am in this subject and would greatly appreciate any guidance anyone could give me. I currently have a job, so full-time study is not an option. My goal is to become a software/applications developer. My questions are:
    - Should I take a second bachelor's in computer science, or a master's, or continue taking professional certificate programs (and how are these viewed)?
    - Would a master's in computer science make sense if I don't have the formal foundation? (Being a chief without ever being an Indian.)
    - General advice for a novice to develop skill?

    Thank you in advance for helping me out.

  • Reasons NOT to open source not-for-profit code?

    - by naught101
    I am a big fan of open source code, and I think I understand most of the advantages of going open source. As a science student researcher, I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's not public). I can't really see a good reason for this, and I can see that the code, and the people using it, would definitely benefit from being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code).

    Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and with an OSI-compliant license? (I realise there are a few similar questions on SE, but most focus on situations where the code is primarily used for making money, and I couldn't find much that was relevant in the answers.)

    Clarification: by "not-for-profit" I am counting downstream profit motives, such as parent-company brand recognition and investor profit expectations, as profit. In other words, the question relates only to software with no profit motive tied to it whatsoever.

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect whether you enter a special region. Let's assume there is a map so big that simply checking every trigger region would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think that larger games do the same, because even if there are only a few coordinates you are interested in, you have to loop through every trigger zone to see whether the player is inside it - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I could imagine is to split the world up into multiple parts, register the event not on the movement itself but on all the parts that are covered by your area, and only check areas that are registered in the part the player is currently in (a sketch of this idea follows below).

    And another thing I would like to know: how could you detect that someone must have entered a trigger even though you never saw him directly inside it, because his client only sent you a move packet shortly before entering and another shortly after leaving the trigger area? Drawing a line and calculating all the colliding parts seems rather CPU-intensive if you have to perform it on every move.
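
    A minimal Java sketch of the cell-partitioning idea from the question: each trigger registers itself in every grid cell its bounding box overlaps, and a movement is checked only against triggers stored in the cells along its path, never against the whole map. All names are invented for illustration, and the segment sampling is a naive stand-in for a proper swept test: the sampling step must stay smaller than the thinnest trigger, or fast players can still tunnel through.

        import java.util.*;

        class TriggerGrid {
            static final double CELL = 32.0;   // world units per grid cell

            static class Trigger {
                final double minX, minY, maxX, maxY;
                Trigger(double minX, double minY, double maxX, double maxY) {
                    this.minX = minX; this.minY = minY;
                    this.maxX = maxX; this.maxY = maxY;
                }
                boolean contains(double x, double y) {
                    return x >= minX && x <= maxX && y >= minY && y <= maxY;
                }
            }

            private final Map<Long, List<Trigger>> cells = new HashMap<>();

            private static long key(int cx, int cy) {  // pack cell coords into one map key
                return ((long) cx << 32) ^ (cy & 0xffffffffL);
            }

            void register(Trigger t) {                 // add trigger to every cell it overlaps
                for (int cx = (int) Math.floor(t.minX / CELL); cx <= (int) Math.floor(t.maxX / CELL); cx++)
                    for (int cy = (int) Math.floor(t.minY / CELL); cy <= (int) Math.floor(t.maxY / CELL); cy++)
                        cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(t);
            }

            // Sample the segment between two move packets so a trigger crossed
            // between updates is still detected; only triggers in the sampled
            // cells are tested.
            List<Trigger> crossed(double x0, double y0, double x1, double y1) {
                List<Trigger> hits = new ArrayList<>();
                int steps = 1 + (int) Math.ceil(Math.hypot(x1 - x0, y1 - y0) / (CELL / 4));
                for (int i = 0; i <= steps; i++) {
                    double x = x0 + (x1 - x0) * i / steps;
                    double y = y0 + (y1 - y0) * i / steps;
                    long k = key((int) Math.floor(x / CELL), (int) Math.floor(y / CELL));
                    for (Trigger t : cells.getOrDefault(k, Collections.emptyList()))
                        if (t.contains(x, y) && !hits.contains(t)) hits.add(t);
                }
                return hits;
            }
        }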

  • How do you stay productive when dealing with extremely badly written code?

    - by gaearon
    I don't have much experience working in the software industry, being self-taught and having participated in open source before deciding to take a job. Now that I work for money, I also have to deal with some unpleasant stuff, which is normal of course. Recently I was assigned to add logging to a large SharePoint project written by a programmer who was obviously learning to code on the job. After 2 years of collaboration, the client switched to our company, but the damage was done, and now somehow I need to maintain this code. Not that the code is too hard to read. Despite its problems - each project has one class with several copy-pasted methods, enormous if nestings, Systems Hungarian, undisposed connections - it's still readable. However, I found myself absolutely unproductive, despite working on something as simple as adding logging. Basically, I just need to go through the code step by step and add some trace calls. However, the idiocy of the code is so annoying that I get tired within 10 minutes of starting. In the beginning, I used to add using constructs, reduce nesting by reversing ifs, and rename variables to readable names - but the project is large, and eventually I gave up. I know this is not the task I should be doing, but at least reducing the mess gave me some kind of psychological reward, so I could keep going. Now the trick has stopped working, and I still have 60% of my work to do. I started having headaches after work, and I no longer get the feeling of satisfaction I used to get - which would usually allow me to code for 10 hours straight and still feel fresh. This is not just one big rant; I really do have an actual question: is there a way to stay productive and not fight windmills? Is there some kind of psychological trick to stay focused on the task, instead of thinking "How stupid is this?" each time I see another clever trick by the previous programmer? The problem with adding logging is that I actually have to understand what the code does, and doing so hurts my brain in an unpleasant fashion.

  • Automate #include refactoring in C++ [on hold]

    - by Mikhail
    I have a big project with hundreds of files. And as often happens to C++ projects, the #include directives are messed up. I want to refactor them to increase clarity, decrease compilation time and simplify analysis.

    For each .h file I want to make sure that:
    It has #include directives only for the types it uses,
    but only forward declarations for types that are used as T* or T&.
    For each .cpp file I want to make sure that:
    It has #include directives only for the types it uses that are not already included by other headers (no indirect includes where possible).

    I'm looking for a tool to help me automate this refactoring. For now I only know of tools that help remove redundant includes; there are many:
    PC-lint
    include-what-you-use
    cppclean
    ProFactor IncludeManager
    But I know of no tools to help me move necessary includes into .h files, or replace includes with forward declarations. Any ideas? Tools for Windows and Visual Studio are preferred.

    Update: Considered to be off-topic. Please follow the link on Software Recommendations: http://softwarerecs.stackexchange.com/q/4461/3331

  • Four Proven Advantages of Online Learning | Outside Cost, Accessibility or Flexibility

    - by Mohit Phogat
    Coursera believes that online courses complement and supplement traditional education (versus the common misconception that online learning will "replace" it). Our research shows that Coursera's platform, when used concurrently with a traditional classroom setup, is ideal for "blended learning" (i.e., students watch lectures pre-class, then class time focuses on interactive work and discussion). Additionally, we agree with Brad Zomick of SkilledUp, an online learning aggregator, who acknowledges that an online course "isn't an alternative at all but rather a different path with its own rewards."

    The advantages of Coursera and our apps for mobile were straightforward and conspicuous from the start: we're free, open, and flexible to learners' unique needs and styles. Over the past two years, however, the evidence has shown there are many more tangible benefits to open, online learning. In SkilledUp's "The Advantages of Online Courses [Infographic]", crafted from the findings of leading educational research, four observations stand out:

    Speedier learning - "Research shows that online students achieve same or better learning results in about half the time as those in traditional courses"
    More active, engaged and motivated - learners thrive "when working with coursework that is challenging but within their capacity to master"
    Tangible skill building - with an "improved attitude toward learning"
    Better teaching quality - courses are taught by experts, with various multimedia and cutting-edge technology, and "are usually better organized than traditional courses"

    This is only the beginning, Courserians! Every day we hear your incredible stories about how open online courses enrich your lives and enhance your careers, while we study the steady stream of scientific, big-data research proving their worth on a large scale (such as UPenn's latest research on the welcomed diversity in Coursera-hosted Wharton MBA courses). Our motto "Learning without Limits" reminds us that open, online courses give tremendous opportunity to those who might not otherwise have access (or time, or money) to study at a high-caliber institution.

    Source: Coursera

  • Ubuntu 11.10 won't boot on Dell XPS 8300

    - by Phil Gorman
    I have a brand new Dell Studio XPS 8300 desktop with an i7-2600 CPU, H67 chipset, 8GB DDR3, two 1TB HDDs in mirrored RAID, and an AMD Radeon 6770. Dell doesn't support Ubuntu here in Australia, so it came with Windows 7 and Windows software. Yes, I had to pay for an OS and software I didn't want in order to get the hardware I did want, all at a greatly inflated price. It's not all beer and skittles in the land of Oz.

    I changed the boot priority in the BIOS to DVD and ran Ubuntu 11.10 64-bit from the ISO with nomodeset. The installation reformatted all partitions to rid me of the dreaded Windows. All was well until reboot. The BIOS does its thing, then it's the Black Screen of Death with a blinking cursor; no boot screen, no GRUB, no keyboard, no mouse. I've searched the Dell and Ubuntu forums in vain. Can you help? I would be really grateful for any advice which can help turn my big expensive paperweight into a really useful machine. Thank you in anticipation, kind people. Phil

  • Web to Print system - which client side technology should I use

    - by mr null
    I'm getting started on developing a Web to Print system. My main concern is which client-side technology to use: jQuery/CSS, Flex/ActionScript, or something else?

    For now, the idea is that the user/customer:
    chooses a product, e.g. a business card
    sets attributes: dimensions, paper type, etc.
    picks a template, or starts blank
    adjusts the product (editor)
    previews & orders

    The output should be PDF at 300dpi. My main issue is adding and adjusting text in the editor (font size, font family, ...), because the application should work cross-browser, and I don't think 10px Arial renders the same in Firefox 5 and IE 8. It must be pixel-perfect in every browser: if somebody orders $100 of business cards and the text is different from what he/she saw when ordering, that is a big problem. So the Flash platform should be the answer. But as I see it, that is a dying technology - 2012 is here, and HTML5 is replacing Flash rapidly. I hope you understand me; every guideline or a few smart words would be very much appreciated.

  • Internship in License Contract Management

    - by cristian.condurache(at)oracle.com
    Hi everyone, my name is Luca. I am an intern in the License Contract Management team in Italy. I studied Economics and Business in Pescara and finished my Master's Degree in July 2009. After a short work experience near my home town, I decided to look for a job in an international company. I got in touch with Oracle in January 2010. I had a telephone interview and then a face-to-face interview. On a cold and grey morning, I arrived in Milan... my first impression was fantastic... a big, modern building with wide TVs everywhere. I was a little nervous but very excited. I understood this could be a great opportunity... The interview went well and I started to work in March. After a training period I was quickly involved in the closing of the last quarter of the fiscal year - of which May is the last month at Oracle.

    Working as a License Contract Manager is a real challenge for a fresh graduate. It involves thoroughly understanding the Oracle Policies and Practices with regard to License Contracts. In my experience, especially in May, I learnt to work under high pressure, within time constraints, and to keep up with constant changes. In this period I also had the opportunity to be involved in different negotiations, being directly in contact with the customers. This helped me to develop my relational skills during complex transactions. Looking back at the nine months at Oracle, I can say I have a better understanding of the IT world. It is a complex environment that changes continuously, offering new challenges to learn from every time.

    If you have any questions related to this article, feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Technorati Tags: License Contract Management, opportunity, Oracle Policies, internship

  • September OTN Member Offers

    - by Cassandra Clark - OTN
    Oracle OpenWorld and JavaOne are coming... so the OTN team is knee-deep in planning the OTN Lounges that will be at each event this year (more info in another post soon), but we managed to work with our partners to offer a nice BIG list of NEW offers for September. Visit the Oracle Technology Network Member Discount page for codes and links to these great offers!

    Oracle Press - Oracle Technology Network members get 40% off the newest Oracle Press title, Oracle Exalogic Elastic Cloud Handbook by Tom Plunkett!

    Mike Murach & Associates, Inc. - Get 30% off and a sample chapter of Murach's SQL Server 2012 for Developers by Bryan Syverson and Joel Murach.

    Manning - 41% off the titles below, and a sample chapter of each:
    Making Java Groovy
    OCA Java SE 7 Programmer I Certification Guide

    Apress - Get 30% off on apress.com on Java 7 Recipes: A Problem-Solution Approach.

    Safari Books Online - OTN members get 30 days of free access + 20% off unlimited access to Safari Books Online for 6 months. Safari Books Online offers subscription access to more than 24,000 books and training videos about technology, digital media, business management and professional development from leading publishers such as Oracle Press, O'Reilly Media, Que, Addison-Wesley, Wrox, Cisco Press, Microsoft Press, McGraw Hill, Wiley, Apress, Adobe Press and many others.

    Already a customer? Come see us at Oracle OpenWorld (booth 537) or JavaOne (booth 5110) and mention this to get a shirt!

  • Microsoft MVP Again for 2011

    - by Vincent Maverick Durano
    I just got great news from Microsoft: I have been re-awarded as a Microsoft MVP (Most Valuable Professional) for this year. This is my 3rd year in a row as an MVP and I'm of course very happy about it and feel honored. Woohoo!! Here's the proof =}

    Dear Vincent Maverick Durano,

    Congratulations! We are pleased to present you with the 2011 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in ASP.NET/IIS technical communities during the past year. The Microsoft MVP Award provides us the unique opportunity to celebrate and honor your significant contributions and say "Thank you for your technical leadership."

    BIG thanks to Microsoft, my MVP Lead Lilian Quek, my readers, and everyone who has supported me!!!

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well?

    I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports).

    Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web.

    Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification.

  • Where to store global enterprise properties?

    - by shylynx
    I'm faced with a crowd of Java applications which need various global, enterprise-wide properties to operate, for example: the hostname of the central RDBMS, the hostname and location of the central self-service portal, the host location of the central LDAP, the host location of the central mail server, etc.

    Currently we build each application with a properties file in which all these properties are defined. But that's a very bad solution, because if the hostname of the mail server changes, we need to change the properties file for each application and deploy all applications again. Our idea is to centralize these properties, so that each application can ask for each property at runtime. For example:

    Idea: Put the properties file on an easily accessible file share, so if we need to change a property, each application picks up the new value (a sketch of this idea follows below).
    Idea: Put the properties in a database. Main disadvantage: we add a dependency on database client libraries to each application.
    Idea: Put all applications into one big application server that provides system properties for each application. Main disadvantage: needs deployment of each application to one application server, which isn't a realistic scenario.
    Idea: A web service that provides the global enterprise-wide properties. Main disadvantage: not very secure, because some properties are passwords or user credentials.

    What other alternatives are recommended? What is state of the art?
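
    To make the first idea concrete, here is a minimal Java sketch; the URL, file names and property key are placeholders, not real infrastructure. Each application pulls the shared properties from one central location at startup and falls back to a locally cached copy when the central source is unreachable.

        import java.io.*;
        import java.net.URL;
        import java.util.Properties;

        public class EnterpriseProperties {
            private static final String CENTRAL =
                "http://config.example.corp/enterprise.properties";   // hypothetical central source
            private static final File CACHE =
                new File(System.getProperty("user.home"), ".enterprise.properties");

            public static Properties load() {
                Properties props = new Properties();
                try (InputStream in = new URL(CENTRAL).openStream()) {
                    props.load(in);
                    try (OutputStream out = new FileOutputStream(CACHE)) {
                        props.store(out, "cached copy of central properties");  // refresh local cache
                    }
                } catch (IOException centralDown) {
                    try (InputStream in = new FileInputStream(CACHE)) {         // central source unreachable
                        props.load(in);
                    } catch (IOException noCache) {
                        throw new IllegalStateException(
                            "Neither central nor cached properties available", noCache);
                    }
                }
                return props;
            }

            public static void main(String[] args) {
                System.out.println(load().getProperty("mail.server.host"));     // hypothetical key
            }
        }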

  • Strange behavior in Gnome after update from 13.04 to 13.10

    - by WayneBrady
    I ran an automatic update on my Ubuntu system today, from 13.04 to 13.10. Since then I have been in really big trouble. I use a VNC server with Gnome Classic; after the update my Gnome was gone. So I tried everything, starting with the xstartup file of the VNC server.

    I have now reached a point where I can't find the answer. The log file says that gnome-session-fallback is missing, even directly after I installed it with apt-get (I tried it several times: installing, uninstalling, and so on). I have no way to use it, as you can see in this terminal copy:

    root@ip-xxx:~/.vnc# apt-get install gnome-session-fallback
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      gnome-session-fallback
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/2,914 B of archives.
    After this operation, 247 kB of additional disk space will be used.
    Selecting previously unselected package gnome-session-fallback.
    (Reading database ... 210977 files and directories currently installed.)
    Unpacking gnome-session-fallback (from .../gnome-session-fallback_1%3a3.6.2-0ubuntu15_all.deb) ...
    Setting up gnome-session-fallback (1:3.6.2-0ubuntu15) ...
    root@ip-xxx:~/.vnc# gnome-session-fallback
    The program 'gnome-session-fallback' is currently not installed. You can install it by typing:
    apt-get install gnome-session-fallback

    If you have some idea, please give me a hint... Thank you!

  • How can I convert the Nvidia driver installer into a deb?

    - by Oli
    Every so often there's a beta version of the Nvidia driver that I want to try out. This has happened today: there's been a big performance issue with version 295.40 and I want to try the shiny new XRandR-enabled 302.07. I'm more than able to download the installer, remove all the repo-installed driver files and install the new version, but it's frankly a pain in the bottom to turn that around and go back to the repo version. It also means I have to re-install the driver manually each time there's a kernel upgrade. The other option we commonly give people is a PPA, but in this case I'm being really impatient. It's going to be a few days before any PPA gets this, but I need to try this today. I've already manually installed it on the media centre and I'm eyeing up my desktop now. So how do I take an installer (e.g. NVIDIA-Linux-x86-302.07.run) and convert that into a new nvidia-current/nvidia-current-updates package? Another way of asking this might be: how do people package the Nvidia drivers?

  • installing ubuntu on SSD

    - by kunal
    I am going to do a fresh install of Ubuntu 10.10 on a new Intel X25-M 80GB SSD. I have been googling for the past few days and getting overwhelmed by articles/blogs/Q&As, one particularly useful one being "Optimize for SSD" (I could not post other links as I don't have enough credits). But with so many suggestions and differences of opinion across different links, this simple OS install has started to seem a daunting task, and I really want to stick with Ubuntu (although I have used it only for a very short period of time). Can someone help me by answering a few questions (yes, they are repeated, as I couldn't comprehend the answers elsewhere)?

    Which file system (ext2/3/4 or something else)? (Consider SSD life.) Can it be changed after installation?
    Should I partition the disk, as we do on a traditional HDD? For now there is no plan for dual booting; only Ubuntu will live on the scarce 80GB of SSD space.
    I have 2GB RAM; should I still allocate swap space? (If I don't allocate swap space, can I still hibernate the machine?) Will swap space impact SSD life? Should I consider adding another 1GB of RAM to avoid swap space?

    Linux experience: absolute novice. Intended usage: heavy browsing, programming, regular video/music and some other non-CPU/RAM-intensive programs; I will back up big files to an external hard drive. Laptop config: 3-year-old Vaio, Core 2 Duo, 2GB RAM.

    Please pardon the repetition, and I really appreciate anyone helping me get started with Ubuntu.

  • Best practices: Ajax and server side scripting with stored procedures

    - by Luka Milani
    I need to rebuild a huge old website, probably porting everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. Currently the website uses:
    Ajax (client side, with prototype.js)
    ASP (VBScript server side)
    SQL Server 2005
    IIS 7 as web server

    This website uses hundreds of stored procedures, and the requests are made by Ajax calls to a single ASP page that contains a huge select case. A short example:

    JAVASCRIPT + PROTOTYPE:

    var data = {
        action: 'NEWS',
        callback: 'doNews',
        param1: $('text_example').value,
        ......: ..........
    };
    AjaxGet(data); // perform a call using another function + prototype

    SERVER SIDE ASP:

    <%
    ......
    select case request("Action")
    case "NEWS"
        With cmmDB
            .ActiveConnection = Conn
            .CommandText = "sp_NEWS_TO_CALL_for_example"
            .CommandType = adCmdStoredProc
            Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput, 6)
            Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
            ' ........ can be more parameters
            .Parameters.Append par0DB
            .Parameters.Append par1DB
            par0DB.Value = request("Param1")
            par1DB.Value = request(".....")
            Set rs = cmmDB.Execute
            RecodsetToJSON rs, jsa ' create JSON response using a sub
        End With
    ....
    %>

    So as you can see, I have one ASP page with a lot of CASEs, and this page answers all the Ajax requests on the site. My questions are:

    Instead of having many CASEs, is it possible to write dynamic VB code that parses the Ajax request and dynamically creates the call to the desired SP (also binding the parameters passed from JS)? (A whitelist-based sketch of this idea follows below.)
    What is the best approach to handling situations like this, using the advantages of .NET plus Prototype or jQuery?
    How do big sites handle situations like this? Do they create one page per request type?

    Thanks in advance for suggestions, directions and tips.
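
    One way to collapse the select case, sketched here in Java/JDBC only because that keeps the idea concrete; the same shape ports directly to ADO.NET. The dispatcher keeps a whitelist mapping each allowed action to a stored procedure name and its expected parameter names, builds the call with placeholders, and binds the request values, so a request can never name an arbitrary procedure. Apart from sp_NEWS_TO_CALL_for_example, all action and procedure names are invented.

        import java.sql.*;
        import java.util.Map;

        public class SpDispatcher {
            // action -> { procedure name, parameter names in order }
            private static final Map<String, String[]> ACTIONS = Map.of(
                "NEWS", new String[] { "sp_NEWS_TO_CALL_for_example", "Param1" },
                "USER", new String[] { "sp_GET_USER", "UserId" });   // hypothetical second action

            public static ResultSet dispatch(Connection conn, String action,
                                             Map<String, String> request) throws SQLException {
                String[] meta = ACTIONS.get(action);
                if (meta == null)
                    throw new IllegalArgumentException("Unknown action: " + action);

                // Build "{call sp_name(?, ?, ...)}" with one placeholder per parameter.
                StringBuilder call = new StringBuilder("{call ").append(meta[0]).append("(");
                for (int i = 1; i < meta.length; i++) call.append(i > 1 ? ", ?" : "?");
                call.append(")}");

                CallableStatement stmt = conn.prepareCall(call.toString());
                for (int i = 1; i < meta.length; i++)
                    stmt.setString(i, request.get(meta[i]));   // bind, never concatenate
                return stmt.executeQuery();
            }
        }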

  • Inheriting projects - General Rules?

    - by pspahn
    This is an area of discussion I have long been curious about, but overall I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason they fired their previous developer, and it's now up to you to save the day.

    I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There's a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash".

    What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help manage client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?
