Search Results

Search found 16758 results on 671 pages for 'great programmer'.


  • links for 2011-02-21

    - by Bob Rhubart
    - Calling all enterprise architects | Enterprise architecture - InfoWorld: Nominations are now open for the 2011 InfoWorld Enterprise Architecture Award, honoring companies whose enterprise architecture initiatives made a difference. (tags: ping.fm)
    - Red Tape, Part II : OTN Garage: "How do you back up all of that storage? Tape: really fast tape. And, lots of it. This creates a whole variety of very interesting challenges today, elevating the topic to – at the very least – glamorous, but I think it qualifies as being downright hot!" - Kemer Thomson (tags: oracle entarch datastorage)
    - The Buttso Blathers: Using Secure Config Files with the WebLogic Maven Plugin: "WebLogic Server has long had a mechanism to provide a more secure way of connecting to the Administration Server from client utilities such that the username and password do not need to be specified and therefore can't be seen from the process list or command shell history." (tags: oracle weblogic)
    - World-class EA | Open Group Blog: "World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for societal value." - Mick Adams (tags: enterprisearchitecture entarch opengroup)
    - Enterprise Process Maps: A Process Picture worth a Million Words (Telecommunications Architecture Corner): "Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment..." - Raul Goycoolea (tags: oracle otn telecommunications businessprocess entarch bpm)
    - Andrejus Baranovskis's Blog: WebCenter PS3 Customization Manager - Long Awaited Feature for MDS: Oracle ACE Director Andrejus Baranovski shares "really great news for those of you who are working on MDS personalization and customization support in Oracle Fusion Middleware applications." (tags: oracle otn oracleace webcenter enterprise2.0)
    - Oracle WebCenter: Common User Experience Architecture (Oracle Enterprise 2.0 Blog): Kellsey Ruppel describes "how the new release of Oracle WebCenter delivers a Common User Experience Architecture." (tags: oracle otn webcenter enterprise2.0)
    - Java / Oracle SOA blog: Do your SOA deployments & configuration with AIA: Oracle ACE Edwin Biemond illustrates the use of the SOA Suite / FMW deployment framework, "one of the Application Integration Architecture (AIA) hidden gems." (tags: oracle oracleace soa otn fusionmiddleware)
    - Enterprise Software Development with Java: Clustering Stateful Session Beans with GlassFish 3.1: Oracle ACE Director Markus Eisele describes what he did "to get a Stateful Session Bean failover scenario working with two instances on one node." (tags: oracle otn oracleace glassfish)
    - Enhanced REST Support in Oracle Service Bus 11gR1 (SOA Thinker): Jeff Davies illustrates how to re-implement the REST-ful Products services using query strings for passing parameter information. (tags: oracle otn soa REST)

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first-hand the offering from discountasp.net for hosted TFS 2010. This first part is a description of the setup process for the account itself, plus some additional information on what you will find through the portal on their site. Not long ago, I posted a little tidbit on hosting TFS. Through it I also did a shameless plug for my employer, our services and the type of hosting we recommend. So, wouldn't my running on discountasp.net be an issue? Actually? NO. Ok, enough rambling. Let's get some details here.

    It is a Software as a Service model. Through it we get source control/version control, work item tracking and such. What about Build? If your need includes Build Management and such, you may need to look at some other options. But this is still a great offering for those who are moving from SourceSafe, or for organizations with 3 to 5 developers on staff that do not foresee getting larger anytime soon. Can it support more than 5 developers? Yes, but then we need to get into how you are using TFS. Do you need more than just Basic? For example, SharePoint and Reporting Services integration.

    The signup process was seamless! Very easy to follow and complete, and then transition to Visual Studio to start working. An email followed the signup process; it contained details on how to get to the Team Foundation Server Control Panel login. Once there, I went through the initial setup process of naming my Team Project Collection, clicked the area to get my server info, and then it was a matter of getting the first user in there.

    Then on to connecting Visual Studio to my hosted TFS. With the server information in hand and the user account created, I configured those options in Visual Studio. Using Team Explorer, I added a new server configuration. Once this is provided, click OK; you will be challenged for a username and password. Provide them, and then click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no Projects (I already have 2 going). Click Connect, and you will be back in Team Explorer.

    My next post on the topic will be on creating your first Team Project and uploading a Project Template to the server.

    Read the article

  • Shelving – What is it – and more importantly, can it help me?

    - by Chris Skardon
    Since we shifted to TFS we've had the ability to perform what is known as 'shelving'. Shelving (whilst not a wholly new topic in the world of SCC) is new to us, and didn't exist in our previous SCC solution – SVN. Soo… what is it?

    What?
    Shelving is a way to check in but not check in your code. By shelving you submit a copy of your 'pending changes' to the SCC server (which maintains a list of the shelvesets), and once that is done you can either continue working, or undo your changes, safe in the knowledge that a backup copy exists on the server. You can unshelve your code at any time and get back to the state you were in when you shelved.

    Yer, that is great, but why not just check it in??
    Shelvesets don't have to build. The shelveset you put in there could be entirely broken, or it might solve every bug in the system – shelves aren't continuously integrated, so you can shelve anything.

    Hmmmm… What else?
    Shelving allows us to do some pretty cool stuff that beforehand was quite frankly a pain. For instance, Gated Check-ins are implemented via the shelving mechanism: when code is checked in, what you're actually doing is shelving it. The Build Controller will build the shelveset with the original code, and if it succeeds the code will be committed; if it fails – well – it's only you that has to fix the code :) Other nice features are things like the ability to share code you are working on… For example, if I were having trouble with a particular piece of code, I could shelve it, and then you (yes you) could get that shelveset and check out the problem for yourself. And if you fix it?? Well – you could check it in!

    Nice, but day-to-day shizzle?
    Let's say you've been working on your project and your project manager comes over to you and says: "Hey, errr, bad times, there is an urgent bug we need you to fix, it needs to go out now!" (for this to play out, we'll need to assume you're currently working in the 'release' branch for another bug fix (maybe))… You could undo all your current changes (obviously you'll probably back up your code using zip or something, I imagine), fix the bug, then re-copy your backup over the top – or you could shelve and unshelve.

    Perhaps some other uses will awaken the shelver in you… :) Before each check-in, if you shelve, you no longer need to worry (if indeed you do) about resolving conflicts and mysteriously losing your code… Going home at night? Not checking in straight away? Why not shelve; this way, should the worst come to the worst and your local PC gives up, you can just get the shelveset onto another machine and be up and running in minutes…
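    (For a concrete feel of that day-to-day workflow, here is a minimal command-line sketch using tf.exe; the shelveset name and comment are made up for illustration:)

        tf shelve "ReleaseBranchWIP" /comment:"Parking my changes for the urgent fix"
        tf undo . /recursive
        REM ...fix the urgent bug and check it in, then recover your parked work:
        tf unshelve "ReleaseBranchWIP"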

    Read the article

  • WebAPI and MVC4 and OData

    - by Aligned
    I was looking closer into Web API, specifically how to use OData to avoid writing GetCustomerByCustomerId(int id) methods all over the place. I had problems just returning IQueryable<T> as some sites suggested for Web API (assembly System.Web.Http.dll, v4.0.0.0). I think things changed in the release version and the blog posts are still out of date. There is no [Queryable] attribute there, as the answer to this question suggests. Once I got the WebApi.OData NuGet package and added [Queryable] to the method, http://localhost:57146/api/values/?$filter=Id%20eq%201 worked (don't forget the '$').

    Now the main question is whether I should do this, and how to stop logged-in users from sniffing the URL and getting data for other users. John V. Peterson has a post on securing Web API with headers and intercepting the call at that point. He had an update to use HttpMessageHandlers instead. I think I'll use this to force the call to contain some kind of unique code for the user, but I'm still thinking about this. I will not expose this to the public, just to my calls within my Forms Authentication areas.

    Other links:
    http://robbincremers.me/2012/02/16/building-and-consuming-rest-services-with-asp-net-web-api-and-odata-support/ ~ lots of good information
    John V Peterson example: https://github.com/johnvpetersen/ASPWebAPIExample ~ all data access goes through the WebApi and the web client doesn't have a connection string ~ there is a code library for calling the WebApi from MVC using the HttpClient; it's a great starting point
    http://blogs.msdn.com/b/alexj/archive/2012/08/15/odata-support-in-asp-net-web-api.aspx ~ Beta (9/18/2012) NuGet package to help with what I want to do? ~ has a sample code project with examples
    http://blogs.msdn.com/b/alexj/archive/2012/08/21/web-api-queryable-current-support-and-tentative-roadmap.aspx
    http://stackoverflow.com/questions/10885868/asp-net-mvc4-rc-web-api-odata-filter-not-working-with-iqueryable

    For JSON, pass the correct format in the header (Accept: application/json); $format=JSON doesn't appear to be working. Async methods are built into Web API! Look for the GetAsync methods.
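    As a minimal sketch of the working setup described above (the Item class and the in-memory data are illustrative assumptions, not from the original project):

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Item
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ValuesController : ApiController
        {
            private static readonly List<Item> Data = new List<Item>
            {
                new Item { Id = 1, Name = "First" },
                new Item { Id = 2, Name = "Second" }
            };

            // [Queryable] comes from the WebApi.OData NuGet package; it applies
            // OData query options such as $filter to the returned IQueryable.
            [Queryable]
            public IQueryable<Item> Get()
            {
                return Data.AsQueryable();
            }
        }

    With this in place, a request like /api/values/?$filter=Id eq 1 is filtered on the server rather than in the client.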

    Read the article

  • Oracle is Sponsoring LinuxCon Europe 2012

    - by Zeynep Koch
    The architecture in Barcelona is amazing, but you will also be impressed with the Oracle Linux sessions at LinuxCon Europe. Oracle is one of the key sponsors of LinuxCon Europe and we have great sessions to show you why Oracle Linux is best for your "IT architecture"! We also have a booth where you can pick up the latest Oracle Linux and Oracle VM DVD Kit and the Virtualization for Dummies booklet. Don't forget to visit us at the technology showcase, Booth #19.

    Oracle Sessions at LinuxCon Europe 2012:

    1. OCFS2: Status and Overview - Lenz Grimmer, Oracle
    Wednesday November 7, 2012, 10:40am - 11:25am, Venue: Diamant
    OCFS2, Oracle's general-purpose shared-disk cluster file system for Linux, has come a long way since its development started in 2003. Distributed under the GPL and part of the mainline Linux kernel, it is also included in Oracle Linux and plays a vital role in products like Oracle VM, Oracle RAC or E-Business Suite. This presentation will provide a general technical overview as well as an update on the latest developments. Attendees will learn about the features and improvements that set OCFS2 apart from other Linux-based cluster file systems, including the heartbeat implementation (global vs. local heartbeats) and storage optimizations (extent-based allocations, hole punching, reflinks).

    2. Status of Linux Tracing - Elena Zannoni, Oracle
    Wednesday November 7, 2012, 11:35am - 12:20pm, Venue: Diamant
    There have been many developments recently in the Linux tracing area. The tracing infrastructure in the kernel is getting more robust, with the recent introduction of uprobes to allow the implementation of user space tracing, and new features of perf. There are many tracing tools to choose from, including the newest kid on the block, DTrace for Linux. This talk will take the audience through the main tracing facilities available today, whether tightly integrated with the kernel code or maintained stand-alone.

    3. MySQL Security Model and Pluggable Authentication - Kristofer Pettersson, Oracle
    Wednesday November 7, 2012, 1:50pm - 2:35pm, Venue: Diamant
    With an increasing security awareness among web and cloud developers, knowing how to secure your database from unauthorized or malicious access has become important. This talk explains the MySQL security model, pluggable authentication, new auditing features and rounds off with some pointers on how to securely integrate your database into your Linux web stack.

    We look forward to seeing you in Barcelona, Spain on November 5-9, 2012. Register today!

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .NET Framework 4.5 and Visual Studio 2012, since they were released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the support for multiple diagrams per model that comes with the new, enhanced Entity Framework designer.

    In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed on my machine SQL Server 2012 Developer edition, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the CodePlex website.

    Before I start showcasing the demo I want to say that I strongly believe that Entity Framework is maturing really fast, and now at version 5.0 it can be used as your data access layer in all your .NET projects. I have posted extensively about Entity Framework in my blog. Please find all the EF-related posts here.

    In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view. In earlier versions we could only have one diagram per EDMX file. In EF 5.0 we can split the model into more diagrams.

    1) Launch VS 2012. The Express edition will work fine.
    2) Create a new project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.NET Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an entity (e.g. Customer) from my diagram and right-click on it, a new option appears: "Move to new Diagram". Make sure you have the Model Browser window open.
    5) When we do that, a new diagram is created and our entity is moved there.
    6) We can also right-click and include the related entities.
    7) When we do that, the related entities are copied to the new diagram.
    8) Now we can cut (CTRL+X) the entities from Diagram2 and paste them back to Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change colors in the various entities that make up the model. Select the entities you want to change color, then in the Properties window choose the color of your choice.

    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy for EF models that have a large number of entities in them. Hope it helps!

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact there are only 2 unique URLs now (like before), but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes, these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.

    Read the article

  • Tell me a Story

    - by Geoff N. Hiten
    I recently had a friend ask me to review his resume. He is a very experienced DBA with excellent skills. If I had an opening I would have hired him myself. But not because of the resume. I know his skill set and skill levels, but there is no way his standard resume can convey that. A bare-bones list of job titles and skills does not set you apart from your competition, nor does it convey whether you have junior- or senior-level skills and experience. The solution is to not use the standard format.

    Tell me a story. I want to know what you were responsible for. Describe a tough project and how you saved time/money/personnel on that project. Link your work activity to business value. Drop some technical bits in there, since we do work in a technical field, but show me what you can do to add value to my business well above what I would pay you. That will get my attention.

    The resume exists for one primary and one secondary reason. The primary reason is to get the interview. A resume won't get you a job, so don't expect it to. The secondary reason is to give you and the interviewer a starting point for conversations. If I can say "Tell me more about when…" and reference an item from your resume, then that is great for both of us. Of course, you had better be able to tell me more, both from the technical and the business side, at least if I am hiring for a senior or higher-level position. As for the junior DBAs, go ahead and tell your story too. Don't worry about how simple or basic your projects or solutions seem. It is how you solved the problem and what you learned that I am looking for. If you learn rapidly and think like a DBA, I can work with that, regardless of your current skill level.

    Read the article

  • 12.04 boots fine, with graphical splash screen, but then Monitor "out of range"

    - by Jim Bednar
    I see dozens of posts from people whose monitors are saying "out of range" under Ubuntu; it seems like there are some serious problems in Ubuntu with autodetection of monitor capabilities. :-( But none of the many, many suggestions I found have solved my problem, and right now I can't use anything graphical on this machine!

    History: I installed Ubuntu 12.04 on my HP ProLiant MicroServer N40L, which worked reasonably at the default resolution across several reboots. At some point I noticed that the proprietary video driver was not in use, and tried to install one to get better window-drawing speeds, but it failed with some sort of error, and I gave up on that. A few weeks later when I next rebooted, it showed the usual BIOS screen and various boot loading screens (including GRUB), and then the usual purple Ubuntu splash screen with the dots showing that things were loading, but when it finished booting the monitor went black and eventually showed "Out of range" (with no other information). Given that there were several weeks between reboots (it's a server, after all), I've no idea if it was a system update, trying to install the proprietary drivers, or something else that caused the problem.

    Anyway, the system has booted fine, as I can do Ctrl-Alt-F1 to get a text prompt and can log in there. But Ctrl-Alt-F7 goes back to the out-of-range error. Some posters said to try Ctrl-Alt-- (minus) to cycle through resolutions until one works, but that didn't have any visible effect. Many, many others said it was a GRUB problem, which seems unlikely given that GRUB's screen looks fine, but I tried editing /etc/default/grub to set a particular resolution (trying many of them) and running update-grub, with no apparent effect. Rebooting into failsafe mode works the same as regular mode. Replacing xorg.conf with xorg.conf.failsafe works the same too.

    I'm at my wits' end! Isn't there anything I can do to convince Ubuntu to choose a mode that the monitor supports, e.g. the one that it is using for the splash screen? I don't need great resolution on this machine, just anything that works!!!!! Help!!!!!! Please!!!!
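    (For reference, the GRUB edit described above looks something like the following; this is a sketch assuming the monitor can do 1024x768, so substitute a mode yours actually supports:)

        # /etc/default/grub (excerpt)
        GRUB_GFXMODE=1024x768
        # keep the same mode when the kernel takes over, rather than re-detecting:
        GRUB_GFXPAYLOAD_LINUX=keep

        # then apply the change and reboot:
        sudo update-grub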

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an e-learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for online and offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances.

    Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework."

    Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NB 7.3's 'Project Easel' is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit-based browsers.

    NetBeans 7.3 beta
    NetBeans 7.3 screencasts

    Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:

    Wednesday October 3rd
    8:30-9:30am     What's New in Servlet 3.1: An Overview                                        Parc 55 Mission
    8:30-9:30am     Bean Validation 1.1: What's New Under the Hood                                Parc 55 Cyril Magnin II/III
    10:00-11:00am   JSR 353: Java API for JSON Processing                                         Parc 55 Mission
    10:00-12:00pm   Tutorial: Integrating Your Service into the GlassFish PaaS Platform           Parc 55 Devisidero
    11:30-12:30pm   What's New in JSF: A Complete Tour of JSF 2.2                                 Parc 55 Cyril Magnin I
    11:30-12:30pm   Best of Both Worlds: Java Persistence with NoSQL and SQL                      Parc 55 Mission
    1:00-2:00pm     Sharding Middleware to Achieve Elasticity and High Availability in the Cloud  Parc 55 Market Street
    1:00-2:00pm     Pimp My RESTful Java Applications                                             Parc 55 Cyril Magnin I
    3:00-4:00pm     Migrating Spring to Java EE                                                   Parc 55 Cyril Magnin II/III
    4:30-5:30pm     JavaEE.Next(): Java EE 7, 8, and Beyond                                       Parc 55 Cyril Magnin II/III
    4:30-5:30pm     HTML5 WebSocket and Java                                                      Parc 55 Cyril Magnin I
    4:30-5:30pm     Easy Middleware for Your Embedded Device                                      Nikko Ballroom II/III

    Read the article

  • Understanding clojure keywords

    - by tjb1982
    I'm taking my first steps with Clojure. Otherwise, I'm somewhat competent with JavaScript, Python, Java, and a little C. I was reading this article that describes destructuring vectors and maps. E.g.:

        => (def point [0 0])
        => (let [[x y] point]
        =>   (println "the coordinates are:" x y))
        the coordinates are: 0 0

    but I'm having a difficult time understanding keywords. At first glance, they seem really simple, as they just evaluate to themselves:

        => :test
        :test

    But they seem to be used in so many different ways, and I don't understand how to think about them. E.g., you can also do stuff like this:

        => (defn full-name [& {first :first last :last}]
        =>   (println first last))
        => (full-name :first "Tom" :last "Brennan")
        Tom Brennan
        nil

    This doesn't seem intuitive to me. I would have guessed the arguments should have been something more like:

        (full-name {:first "Tom" :last "Brennan"})

    because it looks like in the function definition you're saying "no required arguments, but a variable number of arguments comes in the form of a single map". But it seems more like you're saying "no required arguments, but a variable number of arguments comes which should be a list of alternating keywords and values..."? I'm not really sure how to wrap my brain around this. Also, things like this confuse me too:

        => (def population {:humans 5 :zombies 1000})
        => (:zombies population)
        1000
        => (population :zombies)
        1000

    How do maps and keywords suddenly become functions? If I could get some clarification on the use of keywords in these two examples, that would be really helpful.

    Update: I've also seen http://stackoverflow.com/questions/3337888/clojure-named-arguments and while the accepted answer is a great demonstration of how to use keywords with destructuring and named arguments, I'm really looking more for understanding how to think about them: why the language is designed this way and how I can best internalize their use.

    Read the article

  • Oracle went back to school!...

    - by Cristina Ciocoiu
    I am Georgiana, Contracts Manager for Oracle University and Advanced Customer Services in Romania. I started working for Oracle 4 years ago as a Contracts Specialist. Two years ago I became the manager of a team of 9 Contracts Specialists. On a sunny day in March, some members of my team visited the students of the Academy of Economic Studies, accompanied by Recruitment colleagues. This was part of a new initiative to raise awareness of career opportunities at Oracle. We spent approximately 2 hours illustrating and explaining different aspects of the day-to-day activities of an Oracle Contracts Specialist to the future graduates of the Academy.

    Role Play
    Since a role play is worth 1000 job descriptions, the audience witnessed an entertaining performance on the contracting process, from the phase of negotiation with the customer to the actual signing of the contract. The main focus was on the role of the Contracts Specialist liaising with all the groups involved and ensuring that the contract is compliant with Oracle policies while generating the expected revenue. However, the team took other roles as well, i.e. Sales Representative, Customer, Business Approver and Lawyer, to demonstrate their role in the process. As each of these roles only has a small slice of the big pie, it is vital to understand what happens before and after you come on stage as a Contracts Specialist.

    Contracts Specialist
    Being a Contracts Specialist goes beyond simply knowing what policies apply; it means understanding Oracle's core business model, understanding customers' requests and addressing them in the most effective way. The job also involves connecting smaller teams that are often geographically dispersed across multiple regions so that they become a bigger, stronger and more successful team. You are the expert in this key position that can facilitate the closing of a deal or stop it from happening if the risk is too high. The role play provided insights on both.

    Why I love this job
    Events of this kind are sometimes just as useful for the "recruiters" as for the "recruits". For me, as a presenter, it was an excellent opportunity to think about the many reasons why I love what I do in the Contracts department every day and to share this with the students. I wanted to explain to the audience, who are still considering education and career possibilities, that what we do in Contracts DOES make a difference. You have the power to achieve targets that you did not think reachable before. Working in the dynamic Oracle environment shapes you as a person, and there is a lot to take away from this experience. Looking back on my years in the Academy (I graduated from the Academy myself), I wish I could have listened to more people talking about their great jobs and about how I could get there. If those were Oracle people, I might have been writing this article sooner. :)

    If you are interested in joining the Contracts team please click here for more information or contact lavinia.protopopescu-AT-oracle-DOT-com. You can find all openings in Romania via http://campus.oracle.com

    Read the article

  • Parallel MSBuild FTW - Build faster in parallel

    - by deadlydog
    Hey everyone, I just discovered this great post yesterday that shows how to have MSBuild build projects in parallel. Basically all you need to do is pass the switches "/m:[NumOfCPUsToUse] /p:BuildInParallel=true" into MSBuild. For example, to use 4 cores/processes (if you just pass in "/m" it will use all CPU cores):

        MSBuild /m:4 /p:BuildInParallel=true "C:\dev\Client.sln"

    Obviously this trick will only be useful on PCs with multi-core CPUs (which we should all have by now) and solutions with multiple projects, so there's no point using it for solutions that only contain one project. Also, testing shows that using multiple processes does not speed up Team Foundation Database deployments either, in case you're curious. Also, I found that if I didn't explicitly use "/p:BuildInParallel=true" I would get many build errors (even though the MSDN documentation says that it is true by default).

    The poster boasts compile-time improvements up to 59%, but the performance boost you see will vary depending on the solution and its project dependencies. I tested by building a solution at my office, and here are my results (runs are in seconds):

        # of Processes   1st Run   2nd Run   3rd Run   Avg      Performance
        1                192       195       200       195.67   100%
        2                155       156       156       155.67   79.56%
        4                146       149       146       147.00   75.13%
        8                136       136       138       136.67   69.85%

    So I updated all of our build scripts to build using 2 cores (~20% speed boost), since that gives us the biggest bang for our buck on our solution without bogging down a machine, and developers may sometimes compile more than 1 solution at a time. I've put the any-PC-safe batch script code at the bottom of this post.

    The poster also has a follow-up post showing how to add a button and keyboard shortcut to the Visual Studio IDE to have VS build in parallel as well (so you don't have to use a build script); if you do this, make sure you use the .NET 4.0 MSBuild, not the 3.5 one that he shows in the screenshot. While this did work for me, I found it left an MSBuild.exe process hanging around afterwards for some reason, so watch out (the batch file doesn't have this problem). Also, you do get build output, but it may not be the same that you're used to, and it doesn't say "Build succeeded" in the status bar when completed, so I chose not to make this my default Visual Studio build option, but you may still want to.

    Happy building!

        :: Calculate how many Processes to use to do the build.
        SET NumberOfProcessesToUseForBuild=1
        SET BuildInParallel=false
        if %NUMBER_OF_PROCESSORS% GTR 2 (
            SET NumberOfProcessesToUseForBuild=2
            SET BuildInParallel=true
        )
        MSBuild /maxcpucount:%NumberOfProcessesToUseForBuild% /p:BuildInParallel=%BuildInParallel% "C:\dev\Client.sln"

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server-side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product.

    I've figured that GWT was taking client-side metrics into account, so I've added some measures to our Boomerang beacon and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed and everything looks better than it was before. We even have an 82% on Google's Page Speed tool, which is quite cool for a site with some ads in it :)

    Lately, we have signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved response time from 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time, where we see it decreasing over the same period.

    Have you ever had the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check if it is what Google wants?

    Edit 2011/07/26: Thanks for your answers guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have more info. Thanks again!

    Read the article

  • Sweden Azure Group with Michele Leroux Bustamante & Maarten Balliauw Thursday 22nd May

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2014/05/19/156418.aspx

    Sweden Azure Group (SWAG) has the privilege of welcoming Michele Leroux Bustamante and Maarten Balliauw to present sessions at our meeting this Thursday. Michele and Maarten are two of the world's leading experts in cloud computing and Azure, and will be taking time out from their busy schedules to share their ideas with us and answer any questions. Knowit Stockholm are kindly hosting the event at their offices, and providing food and refreshments. It should be a great evening. You can register for the event here.

    Azure Q & A - Michele Leroux Bustamante
    In this interactive Q & A session, Michele Leroux Bustamante will be on hand to share her wealth of experience on Azure-related issues. If you are new to Azure and want some tips to get started, or an experienced developer needing to negotiate the legal and political protocols related to cloud computing, Michele will have been there, done that, and be willing to share her experiences. This session will be entirely driven by the attendees, so please come prepared with questions.

    Reducing latency on the web with the Windows Azure CDN - Maarten Balliauw
    Serving up content on the Internet is something our web sites do daily. But are we doing this in the fastest way possible? How are users in faraway countries experiencing our apps? Why do we have three web servers serving the same content over and over again? In this session, we'll explore the Windows Azure Content Delivery Network, or CDN, a service which makes it easy to serve up blobs, videos and other content from servers close to our users. We'll explore simple file serving as well as some more advanced, dynamic edge caching scenarios.

    Michele Leroux Bustamante
    Michele Leroux Bustamante is CIO at Solliance (solliance.net), cofounder of Snapboard (snapboard.com), and is recognized as a Microsoft Regional Director and MVP. Michele is a thought leader with over 20 years specializing in building scalable and secure end-to-end system design, identity and access management, and cloud computing technologies – for companies of all sizes. In recent years Michele has also helped launch several startup business ventures and has been a mentor to startups in several accelerator programs, providing both technical and business guidance. Michele shares her experiences through presentations and keynotes all over the world, and has been publishing regularly in technology journals.

    Maarten Balliauw
    Maarten Balliauw is a Technical Evangelist at JetBrains. His interests are all web: ASP.NET MVC, PHP and Windows Azure. He's a Microsoft Most Valuable Professional (MVP) for Azure and an ASPInsider. He has published many articles in both PHP and .NET literature such as MSDN Magazine and PHP architect. Maarten is a frequent speaker at various national and international events such as MIX (Las Vegas), TechDays, DPC, …

    Read the article

  • Becoming the well-integrated content company (and combating AIUTLVFS)

    - by Lance Shaw
    Every single day, each of us creates more and more content. Sometimes it is brand new material and many times it is iterations of existing content, but no one would argue against the fact that content is growing at an almost exponential rate. With all this content being created and stored, a number of problems naturally arise. One of the most common issues that users run into is "Am I Using The Latest Version of this File Syndrome", or AIUTLVFS. This insidious syndrome is all too common and results in ineffective, poor or downright wrong business decisions being made. When content or files are unavailable or incorrect within the scope of key business processes, the chance of erroneous and costly business decisions is magnified even further.

    For many companies, the ideal scenario is to be able to connect multiple business systems, both old and new, to one common content repository. Not only does this reduce content duplication, it also helps guarantee that everyone in various departments is working off the proverbial "same page". Sounds simple, but for many organizations the proliferation of file shares, SharePoint sites, and other storage silos of content keeps the dream of a more efficient business a distant one.

    We've created some online assets to help you in your evaluation and eventual improvement of your current content management and delivery systems. Take a few minutes to check out our Online Assessment Tool. It's quick, easy and just might provide you with insights into how you can improve your current content ecosystem. While you are there, check out our new infographic that outlines common issues faced by companies today. Feel free to save our informative infographic PDF and share it with business colleagues and your management to help them understand the business costs and impact of inaction. Together we can stop AIUTLVFS in its tracks and run our businesses more effectively than ever.

    Additionally, we hope you will take a few minutes to visit our new and informative web pages dedicated to the value of a well-connected, fully integrated content management system. It's a great place to learn more about how integrating WebCenter Content into your infrastructure can lower your operational costs while boosting process and worker efficiency.

    Read the article

  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years, and have never really been able to resolve, is exactly at what tier in an application human-readable error information should be retrieved for display to a user.

    A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

        public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

        public class ClearUserAccountsResult : IResult
        {
            public List<Account> ClearedAccounts { get; private set; }
            public bool Success { get; private set; }   // Implements IResult
            public string Message { get; private set; } // Implements IResult, human readable
            // Constructor implemented here to set the read-only properties...
        }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized.

    However, this idea of returning human-readable information from the business tier has always been suspect to me, partly because what constitutes the public surface of the API may change over time... and it may be the case that the API will need to be reused by other API components in the future that do not need the human-readable string messages (and looking them up from a database would be an expensive waste).

    I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and then retrieve human-readable error strings somewhere closer to the UI layer, or only in the UI itself. But I have two problems here:

    1) The UI may be a remote client (WinForms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server.

    2) Often there are multiple legitimate modes of failure. If the business tier becomes so vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was; i.e. if a method has 3 modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be.

    I have thought about using failure enums as a substitute; they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns, rather than just a boolean, but it is not so good for serialization scenarios.

    Is there a well-worn pattern for this? What do people think? Thanks.
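    As a rough sketch of the failure-enum idea mentioned above (all type and member names here are illustrative, not from the original post): the business tier returns only a machine-readable failure mode, and a layer near the UI maps it to display text.

        using System.Collections.Generic;

        // Business tier: no display strings, just a failure mode.
        public enum ClearUserAccountsFailure
        {
            None,             // success
            AccountsLocked,
            PermissionDenied
        }

        public class ClearUserAccountsResult
        {
            public ClearUserAccountsFailure Failure { get; private set; }

            public bool Success
            {
                get { return Failure == ClearUserAccountsFailure.None; }
            }

            public ClearUserAccountsResult(ClearUserAccountsFailure failure)
            {
                Failure = failure;
            }
        }

        // Near the UI: translate failure modes into human-readable text.
        // This table could just as easily be loaded from a database for localization.
        public static class ErrorMessages
        {
            private static readonly Dictionary<ClearUserAccountsFailure, string> Messages =
                new Dictionary<ClearUserAccountsFailure, string>
                {
                    { ClearUserAccountsFailure.AccountsLocked,   "Some accounts are locked and could not be cleared." },
                    { ClearUserAccountsFailure.PermissionDenied, "You do not have permission to clear these accounts." }
                };

            public static string For(ClearUserAccountsFailure failure)
            {
                string message;
                return Messages.TryGetValue(failure, out message) ? message : "An unexpected error occurred.";
            }
        }

    A UI handler can then show ErrorMessages.For(result.Failure) without the business tier ever owning display text.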

    Read the article

  • Friday Fun: Destroy the Web

    - by Mysticgeek
    Another Friday has arrived and it's time to screw off on company time. Today we take a look at a unique game add-on for Firefox called Destroy the Web.

    Destroy the Web
    Once you install the add-on and restart Firefox, you'll see the Destroy the Web icon on the toolbar. Click on it to destroy whatever webpage you're on. The game starts up and gives you a 3-second countdown… Now move the target over different elements of the page to destroy them. You have 30 seconds to destroy the site contents. If you are angry at a particular company, or the one you work for, this can give you some fun stress relief by destroying its website.

    After you've destroyed as much of the page as possible, you're given a score. You get different amounts of points for destroying certain elements on the page. Basically, destroy as much as possible to get the most points. You can submit your score and check out some of the scoreboard leaders as well. There are some cool sound effects and arcade-sounding background music, so make sure to turn the volume down while playing. Or you can go into the add-on options and disable it.

    If you want a unique way to let off some steam before the weekend starts, this is a fun way of doing it.

    Install Destroy the Web for Firefox

    Read the article

  • Restoring MSDB

    - by David-Betteridge
    We recently performed a disaster recovery exercise which included the restoration of the MSDB database onto our DR server. I did a quick google to see if there were any special considerations and found the following MS article: Considerations for Restoring the model and msdb Databases (http://msdn.microsoft.com/en-us/library/ms190749(v=sql.105).aspx). It said both the original and replacement servers must be on the same version. I double-checked, and in my case they are both SQL Server 2008 R2 SP1 (10.50.2500). So I went ahead and stopped SQL Server Agent, restored the database and restarted the agent. I checked the jobs and they were all there; everything looked great, and was until the server was rebooted a few days later.

    Then the syspolicy_purge_history job started failing on the 3rd step with the error message "Unable to start execution of step 3 (reason: The PowerShell subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended). The step failed."

    A bit more googling pointed me to the msdb.dbo.syssubsystems table:

        SELECT * FROM msdb.dbo.syssubsystems WHERE start_entry_point = 'PowerShellStart'

    And in particular the value of subsystem_dll: it still had the path to SQLPOWERSHELLSS.DLL, but on the old server. The DR instance has a different name to the live instance, and so the paths are different. This was quickly fixed with the following SQL:

        USE msdb;
        GO
        sp_configure 'allow updates', 1;
        RECONFIGURE WITH OVERRIDE;
        GO
        UPDATE msdb.dbo.syssubsystems
        SET subsystem_dll = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.DR\MSSQL\binn\SQLPOWERSHELLSS.DLL'
        WHERE start_entry_point = 'PowerShellStart';
        GO
        sp_configure 'allow updates', 0;
        RECONFIGURE WITH OVERRIDE;
        GO

    I stopped and started SQL Server Agent, and now the job completes. I then wondered if anything else might be broken:

        SELECT subsystem_dll FROM msdb.dbo.syssubsystems

    This shows a further 10 wrong paths – fortunately for parts of SQL Server (replication, SSIS, etc.) we aren't using!

    Lessons Learnt
    1. DR exercises are a good thing!
    2. Keep the Live and DR environments as similar as possible.

    Read the article

  • Stop trying to be perfect

    - by Kyle Burns
    Yes, Bob is my uncle too. I also think the points in the Manifesto for Software Craftsmanship (manifesto.softwarecraftsmanship.org) are all great. What amazes me is that we tend to confuse the term "well crafted" with "perfect". I'm about to say something that will make Quality Assurance managers, and many development types as well, cringe until you think about it as a craftsman: "Stop trying to be perfect".

    Now let me explain what I mean. Building software, as with building almost anything, often involves a series of trade-offs where either one undesired characteristic is accepted as necessary to achieve another desired one (or maybe stave off one that is even less desirable), or a desirable characteristic is sacrificed for the same reasons. This implies that perfection itself is unattainable. What is attainable is "sufficient", and I think that this really goes to the heart both of what people are trying to do with Agile and with the craftsmanship movement. Simply put, sufficient software drives the greatest business value.

    I've been in many meetings where "how can we keep anything from ever going wrong" has become the thing that holds us in analysis paralysis. I've also been the guy trying way too hard to perfect some function to make sure that every edge case is accounted for. Somewhere in there, something a drill instructor said while I was in boot camp occurred to me. In response to being asked a question by another recruit having to do with some edge case (I can barely remember the context), he said "What if grasshoppers had machine guns? Would the birds still **** with them?" It sounds funny, but there's a lot of wisdom in those words.

    "Sufficient" is different for every situation, and it's important to understand what sufficient means in the context of the work you're doing. If I'm writing a timesheet application (and please shoot me if I am), I'm going to have a much higher tolerance for imperfection than if you're writing software to control life support systems on spacecraft. I'm also likely to have less need for high-volume performance than if you're writing software to control stock trading transactions.

    I'd encourage anyone who has read this far to, instead of trying to be perfect, try to create software that is sufficient in every way. If you're working to make a component that is already sufficient "better", ask yourself if there is any component left that is not yet sufficient. If the answer is "yes", you're working on the wrong thing and need to adjust. If the answer is "no", why aren't you shipping and delivering business value?

    Read the article

  • JavaScript Class Patterns – In CoffeeScript

    - by Liam McLennan
    Recently I wrote about JavaScript class patterns, and in particular, my favourite class pattern that uses closure to provide encapsulation. A class to represent a person, with a name and an age, looks like this: var Person = (function() { // private variables go here var name,age; function constructor(n, a) { name = n; age = a; } constructor.prototype = { toString: function() { return name + " is " + age + " years old."; } }; return constructor; })(); var john = new Person("John Galt", 50); console.log(john.toString()); Today I have been experimenting with coding for node.js in CoffeeScript. One of the first things I wanted to do was to try and implement my class pattern in CoffeeScript and then see how it compared to CoffeeScript’s built-in class keyword. The above Person class, implemented in CoffeeScript, looks like this: # JavaScript style class using closure to provide private methods Person = (() -> [name,age] = [{},{}] constructor = (n, a) -> [name,age] = [n,a] null constructor.prototype = toString: () -> "name is #{name} age is #{age} years old" constructor )() I am satisfied with how this came out, but there are a few nasty bits. To declare the two private variables in javascript is as simple as var name,age; but in CoffeeScript I have to assign a value, hence [name,age] = [{},{}]. The other major issue occurred because of CoffeeScript’s implicit function returns. The last statement in any function is returned, so I had to add null to the end of the constructor to get it to work. The great thing about the technique just presented is that it provides encapsulation ie the name and age variables are not visible outside of the Person class. CoffeeScript classes do not provide encapsulation, but they do provide nicer syntax. The Person class using native CoffeeScript classes is: # CoffeeScript style class using the class keyword class CoffeePerson constructor: (@name, @age) -> toString: () -> "name is #{@name} age is #{@age} years old" felix = new CoffeePerson "Felix Hoenikker", 63 console.log felix.toString() So now I have a trade-off: nice syntax against encapsulation. I think I will experiment with both strategies in my project and see which works out better.

    Read the article

  • Do I need to create my own or use a commercial server for the features and matchmaking options I want my game to support?

    - by baptzmoffire
    So I'm developing an indie turn-based game for iOS and, in coding up a Game Center matchmaking class, I'm starting to question whether Game Center is even the best choice for what I want this game to do. I need to figure out whether I need to create my own server, invest in a preexisting client or server service, or whether I even need to use a server at all. If I do need to use a ready-made service other than Game Center, which server would accommodate my game's needs best? I have limited resources and funds.

    Here is the list of features I want my game to support, ideally:

    - Turn-based gameplay (a la "with Friends" and "with Buddies" games)
    - Smart matchmaking (matching users up with other players of comparable skill/achievements)
    - Random matchmaking
    - Facebook matchmaking
    - Specific username matchmaking
    - Contact list matchmaking
    - A way to select what "type" of match you want to challenge an opponent to. (In random, smart, and Facebook matchmaking, there will be different "wagers" the player can make. [e.g. "I wanna play a random opponent for 1000 points. Now, I wanna play my Facebook buddy for 1,000,000 points."] There will be a predetermined range of amounts you can play for. It won't be customizable.)
    - Buddies list capability (game-buddies, as opposed to contacts and Facebook)
    - A higher concurrent game cap than Game Center offers (which I still can't really find a straight answer on)
    - Scalability (it should support 2 or 20,000,000 players)
    - Objective-C compatibility
    - Flexibility (for all the stuff I haven't thought of yet)

    Am I dreaming here? Is there even a service that can handle all of these features? Do I need to invest months in learning a networking language to build my own? If so, how much would I need to spend on hardware? I've been looking around all morning and, so far, the only seemingly viable option is SmartFox. Under the "Everything and the kitchen sink" section here, it says they support "virtual world with Zones, Rooms and RoomGroups, create complex game challenges, send invitations, manage buddy lists, create custom permission profiles, oversee the security aspects and tons more." http://www.smartfoxserver.com/overview/platform

    Is there an option that I'm just overlooking? Thanks for any help anyone can provide. Sorry for the long post.

    One last question: does anyone know which server Dice with Buddies uses? I was experimenting with how many concurrent games I could get going and my ADHD kicked in at about 80 games. 80 concurrent games would be great for my game, but again, I need the other features I mentioned too. Thanks again.

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2, launched this month, is its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by the W3C until around 2022. One might have thought it would take years yet to reach the point where any browser was remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit.

    The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way.

    Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 <video> and <audio> tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain from Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share.

    Why do we care? Because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably the most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns, and page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes. HTML5 is likely to provide the base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities. Cheers, Laila
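    To make those facilities concrete, here is a minimal sketch of two of them, 2D drawing and multi-column reflow, using nothing but the browser's own APIs (the "article" element ID and the sizes are invented for illustration; the column properties were still vendor-prefixed in 2010-era WebKit):

        // Canvas 2D drawing: no Flash or Silverlight plug-in involved.
        var canvas = document.createElement("canvas");
        canvas.width = 400;
        canvas.height = 200;
        document.body.appendChild(canvas);

        var ctx = canvas.getContext("2d"); // the HTML5 2D drawing context
        ctx.fillStyle = "steelblue";
        ctx.fillRect(20, 20, 150, 100);
        ctx.font = "16px Georgia";
        ctx.fillStyle = "black";
        ctx.fillText("Drawn in the browser", 20, 160);

        // CSS3 multi-column text reflow, set from script for brevity:
        var article = document.getElementById("article");
        article.style.webkitColumnCount = "3";
        article.style.webkitColumnGap = "1.5em";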

    Read the article

  • What location to put bootloader, when running multiple drives and partition

    - by Matt G
    I have Win8 on my desktop, where a 120 GB SSD is used to run Windows and some select applications, while a 2 TB HDD provides basic file storage and, where possible, a home for applications instead of the SSD. I want to install Ubuntu on a new partition of the HDD (I allocated 300 GB, with a 5 GB swap partition). I used a USB stick to install the OS, which seemed to do the job; however, after it prompted for a restart, I can no longer boot into Ubuntu.

    During installation I was confused about where to point the "boot loader installation". I ended up selecting "/dev/sdb" because I figured I would be able to boot by setting the HDD as a higher priority than the SSD in the BIOS. The boot loader is a large part of where I think I went wrong. My partition layout looked something like this:

        /dev/sda                         // SSD, ~120 GB
        /dev/sda1   NTFS (350 MB)        // Win8 system partition
        /dev/sda2   NTFS (118 GB)        // Win8 C: drive
        /dev/sdb                         // HDD, ~2 TB
        /dev/sdb1   NTFS (1563 GB)       // file storage
        /dev/sdb5   free space (300 GB)  // space I want to use for Linux

    (NOTE: I created two partitions from the 300 GB, ~5 GB and ~295 GB: sdb5 and sdb6.)

    It'd be great to get an explanation of which drive you'd select for the boot loader and why, and which selections won't work with regard to the boot loader installation. I think I understand what GRUB is, but I have no idea how to use it or play around with it. I can still get back into the OS from my USB stick; however, I believe it's just showing me a live/trial session of Ubuntu (i.e. it can't access any of the system NTFS drives). Note that if I try to install from the USB again, it recognizes that a version of Ubuntu 13.10 already exists on the system. Apologies in advance: I've used Windows all my life and don't really know much about Linux at all. I did have a brief skim over some similar questions but didn't find anything too useful:

    - Where to install bootloader when installing Ubuntu as secondary OS?
    - ubuntu 12.10 dual boot with windows 8 on two hdds
    - Dual-boot Windows 7 and Ubuntu on two SSDs with UEFI
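    From skimming those similar questions, the suggested repair seems to be to reinstall GRUB from the live USB session. I haven't confirmed this applies to my setup, so treat it as a sketch; it assumes the Ubuntu root really is on /dev/sdb6 and that the machine boots in legacy BIOS mode rather than UEFI:

        # From the "Try Ubuntu" live session, open a terminal.
        # Confirm the partition names first; /dev/sdb6 is an assumption.
        sudo fdisk -l

        # Mount the installed Ubuntu root plus the pseudo-filesystems GRUB needs:
        sudo mount /dev/sdb6 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys

        # Install GRUB to the HDD's master boot record: the whole disk
        # (/dev/sdb), not a partition such as /dev/sdb1.
        sudo chroot /mnt grub-install /dev/sdb

        # Rebuild the menu; os-prober should detect Windows 8 on the SSD too:
        sudo chroot /mnt update-grub

        # Reboot and put the HDD first in the BIOS boot order.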

    Read the article

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services introduced in SQL Server 2012. During these roadshows we always try to arrange some speeches at local community events in the evening; one is already defined for Copenhagen, while in Amsterdam we have a logistical issue that we're trying to solve. Here is the timetable:

    Netherlands
    - SSAS Workshop in Amsterdam, NL – April 16-17, 2012: a 2-day seminar; Alberto and I will be the trainers for this event (register here)
    - We're trying to arrange a community event, but we still don't have a confirmation; stay tuned

    Denmark
    - SSAS Workshop in Copenhagen, DK – April 26-27, 2012: a 2-day seminar; Alberto and I will be the trainers for this event (register here)
    - Community event on April 26, 2012, running in Hellerup at the Microsoft venue. All details are available here: http://msbip.dk/events/26/msbip-mode-nr-5/ People from Sweden are welcome! Just register with this private group on LinkedIn to announce your presence, so we'll know how many people will attend.

    At the community events we'll deliver two speeches; here are the descriptions:

    Inside xVelocity (VertiPaq): PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize the memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind: xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

    Choosing between Tabular and Multidimensional: You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time either decision might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each according to common scenarios, considering both the short-term and long-term consequences of your choice.

    I hope to meet many people at these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight. Stay tuned…

    Read the article
