Search Results

Search found 20869 results on 835 pages for 'things i hate'.

Page 732/835

  • Site Review: Facebook.com and Blockbuster.com - Navigation Schemes

    After cycling through a list of my favorite sites I decided to select Facebook.com and Blockbuster.com for this post because I found their navigation schemes very intuitive. Facebook, in my opinion, took a very simple and minimalistic approach when they designed their site and its navigation. For example, when you log in to your account you will find on the upper left-hand side a generic section of the site with areas common to all users, like news, messages, events, photos and friends. Below this, in a separate navigation menu, is a list of applications that a user has elected to access through bookmarks. Finally, the upper right-hand corner of the site contains links to administer the user’s account, like account settings, the public profile, and a link back to the user’s home page. Blockbuster, on the other hand, tried to make site navigation a little slicker by using a menu-submenu approach in which users can click on items like Rent, Buy, On Demand, Games, Stores, and Gifts, and a submenu of corresponding items appears below the original menu item. They extended this approach to categorized lists of the movies they offer on the homepage, so users can click on an item like “DVD Spotlight” and a list of movies represented as actual DVD box cases appears on screen, which they can scroll through using the left and right arrows on either side of the images displayed.
    Both Facebook and Blockbuster have more than one navigation grouping because their respective sites are so large and offer an enormous number of features. For this reason they have to organize the main functionality into logical groups based on the actions users perform and the information they need to access. For example, it would not make sense for Facebook to place a particular game you like to play inside the section pertaining to account administration. The game link would be completely out of place and would really confuse the user experience because the items were not logically grouped. In addition, I think Facebook users would benefit if Facebook allowed them to specify what they want on the general navigation from within the site, or at least created a section showing frequently accessed pages or favorite sections. Finally, regarding additional navigation, I think Blockbuster users really benefit from the submenu system of categorizing data; in fact Blockbuster even allows them to refine the information they are looking for through secondary submenus, letting users really drill down into whatever they want to learn more about.
    I do not think that having more than one navigation bar on a web page is confusing for the user. For example, having a navigation bar at both the top and the bottom of a page allows users to move around the website more easily, because they can use whichever bar is closest to where their cursor is on the page. As for designers forcing all the navigation into one navigation bar, I think it would make it hard for the user to fully understand what is going on, depending on the size and complexity of the site they are dealing with. For example, Blockbuster has a ton of content that could not easily be put into one navigation bar. From my experience with both Facebook and Blockbuster, they both do a good job with cross-browser compatibility; I have had no issues with either site in IE, Firefox, Chrome, or Safari over the years. In addition, I do not believe that either Facebook or Blockbuster requires any additional plug-ins to use its navigation bars.

    Read the article

  • ResourceSerializable: an alternate to ORM and ActiveRecord

    - by Levi Morrison
    A few opinionated reasons I don't like the traditional ORM and ActiveRecord patterns:

    - They work only with a database. Sometimes I'm dealing with objects from an API and other objects from a database. All the implementations I have seen don't allow for that. Feel free to clue me in if I'm wrong on this.
    - They are brittle. Changes in the database will likely break your implementation. Some implementations can help reduce this, but a few of the ones I've seen don't.
    - Their very design is influenced by the database. If I want to switch to using an API, I'll have to redesign the object to get it to work (likely).
    - It seems to violate the single-responsibility principle. They know what they are and how they act, but they also know how they are created, destroyed and saved? Seems a bit much.

    What about an approach that is somewhat more familiar in PHP: implementing an interface? In PHP 5.4, we'll have the JsonSerializable interface that defines the data to be json_encoded, so users will become accustomed to this type of thing. What if there was a ResourceSerializable interface? This is still an ORM by name, but certainly not by tradition.

        interface ResourceSerializable
        {
            /**
             * Returns the id that identifies the resource.
             */
            function resourceId();

            /**
             * Returns the 'type' of the resource.
             */
            function resourceType();

            /**
             * Returns the data to be serialized.
             */
            function resourceSerialize();
        }

    Things might be poorly named; I'll take suggestions. Notes: ResourceId will work for APIs and databases. As long as your primary key in the database is the same as the resource ID in the API, there is no conflict. All of the APIs I've worked with have a unique ID for the resource, so I don't see any issues there. ResourceType is the group or type associated with the resource. You can use this to map the resource to an API call or a database table. If the ResourceType was person, it could map to /api/1/person/{resourceId} and the table persons (or people, if it's smart enough). resourceSerialize() returns the data to be stored. Keys would identify API parameters and database table columns. This also seems easier to test than ActiveRecord / ORM implementations. I haven't done much automated testing on traditional ActiveRecord/ORM implementations, so this is merely a guess. But it seems that being able to create objects independently of the library helps me. I don't have to use load() to get an existing resource; I can simply create one and set all the right properties. This is not so easy in the ActiveRecord / ORM implementations I've dealt with.

    Downsides: You need another object to serialize it. This also means you have more code in general, as you have to use more objects. You have to map resource types to API calls and database tables. This is even more work, but some ORMs and ActiveRecord implementations require you to map objects to table names anyway. Are there other downsides that you see? Does this seem feasible to you? How would you improve it?

    Note: I almost asked this on StackOverflow because it might be too vague for their standards, but I'm still not really familiar with programmers.stackexchange.com, so please help me improve my question if it doesn't shape up to standards here.

    Read the article

  • The SmartAssembly Rearchitecture

    - by Simon Cooper
    You may have noticed that not a lot has happened to SmartAssembly in the past few months. However, the team has been very busy behind the scenes working on an entirely new version of SmartAssembly. SmartAssembly 6.5 Over the past few releases of SmartAssembly, the team had come to the realisation that the current 'architecture' - grown organically, way before RedGate bought it, from a simple name obfuscator over the years into a full-featured obfuscator and assembly instrumentation tool - was simply not up to the task. Not for what we wanted to do with it at the time, and not what we have planned for the future. Not only was it not up to what we wanted it to do, but it was severely limiting our development capabilities; long-standing bugs in the root architecture that couldn't be fixed, some rather...interesting...design decisions, and convoluted logic that increased the complexity of any bugfix or new feature tenfold. So, we set out to fix this. Earlier this year, a new engine was written on which SmartAssembly would be based. Over the following few months, each feature was ported over to the new engine and extensively tested by our existing unit and integration tests. The engine was linked into the existing UI (no easy task, due to the tight coupling between the UI and old engine), and existing RedGate products were tested on the new SmartAssembly to ensure the new engine acted in the same way. The result is SmartAssembly 6.5. The risks of a rearchitecture Are there risks to rearchitecting a product like SmartAssembly? Of course. There was a lot of undocumented behaviour in the old engine, and as part of the rearchitecture we had to find this behaviour, define it, and document it. In the process we found some behaviour of the old engine that simply did not make sense; hence the changes in pruning & obfuscation behaviour in the release notes. All the special edge cases we had to find, document, and re-implement. There was a chance that these special cases would not be found until near the end of the project, when everything is functionally complete and interacting together. By that stage, it would be hard to go back and change anything without a whole lot of extra work, delaying the release by months. We always knew this was a possibility; our initial estimate of the time required was '4 months, ± 4 months'. And that was including various mitigation strategies to reduce the likelihood of these issues being found right at the end. Fortunately, this worst-case did not happen. However, the rearchitecture did produce some benefits. As well as numerous bug fixes that we could not fix any other way, we've also added logging that lets you find out exactly why a particular field or property wasn't pruned or obfuscated. There's a new command line interface, we've tested it with WP7.1 and Silverlight 5, and we've added a new option to error reporting to improve the performance of instrumented apps by ~10%, at the cost of inaccurate line numbers in reports. So? What differences will I see? Largely none. SmartAssembly 6.5 produces the same output as SmartAssembly 6.2. The performance of 6.5 will be much faster for some users, and generally the same as 6.2 for the remaining. If you've encountered a bug with previous versions of SmartAssembly, I encourage you to try 6.5, as it has most likely been fixed in the rearchitecture. If you encounter a bug with 6.5, please do tell us; we'll be doing another release quite soon, so we'll aim to fix any issues caused by 6.5 in that release. 
Most importantly, the new architecture finally allows us to implement some Big Things with SmartAssembly we've been planning for many months; these will fundamentally change how you build, release and monitor your application. Stay tuned for further updates!

    Read the article

  • Do MORE with WebCenter

    - by Michael Snow
    WEBCAST THURSDAY!! 03/22/12 Do you need to lower costs? Raise Productivity? Foster Innovation? Improve Online Engagement? But you’re still stuck with Documentum? Step away from the ledge – there is hope – let us help you.

    Top 4 Content Imperatives
    · Lower Costs - Reduce labor, maintenance fees, storage and electrical consumption
    · Raise Productivity - Automation and integration, communication, findability
    · Foster Innovation - Enable collaboration, expertise location
    · Improve Online Engagement – Enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs, and cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce money we are wasting on things like storing content that is redundant or no longer useful. We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What’s more, their wages total 18 percent of the United States GDP. And so we can’t afford information systems that don’t let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and make content more findable so people can make the right decision and move the process forward. And really, to get ourselves out of the current financial status, we can only cut costs so far. We have to innovate out of economic tough times – to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that problems we need to solve have already been solved if we can find the right document, the right person. So we have to provide systems that enable us to stand on the shoulders of our organization’s accomplishments. Good content drives great marketing. Online engagement is growing as an absolute necessity for modern growing marketing organizations that require business users to be enabled for dynamic marketing content creation, updates, and targeted content management. Unfortunately, if you are currently stuck with Documentum, you are really lacking in your Web Experience Management capabilities. Documentum previously used FatWire for web publishing. Now FatWire is part of Oracle.

    Oracle provides powerful web engagement capabilities:
    · Increase sales and loyalty by optimizing online engagement
    · Create, manage and moderate contextually relevant, targeted and interactive online experiences
    · Optimize customer engagement across web, mobile and social channels
    · Manage large scale multichannel global online presence with integration to enterprise applications
    · Enable business users to control their content and make their own updates
    · Publish content from native files – enable navigation of project documents, procedures, policy information
    · Enable content display and updates from existing web applications – one click to drag and drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management. So… if you are still stuck with Documentum and don’t know what to do – contact us – not only will Oracle help you step away from the ledge, but also, with the MoveOff Documentum program, we are offering you a way – trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How’s that for a nice bonus? It’s time to stop maintaining Documentum, and to start innovating with Oracle WebCenter. Learn More Here! To learn more about what Oracle WebCenter can offer you today – join us for a webcast – your eyes will be opened to all that’s possible. Do More with WebCenter: Extend Beyond Content Management

    Read the article

  • What C++ coding standard do you use?

    - by gablin
    For some time now, I've been unable to settle on a coding standard and use it consistently between projects. When starting a new project, I tend to change some things around (add a space there, remove a space there, add a line break there, an extra indent there, change naming conventions, etc.). So I figured that I might provide a piece of sample code, in C++, and ask you to rewrite it to fit your standard of coding. Inspiration is always good, I say. ^^ So here goes:

        #ifndef _DERIVED_CLASS_H__
        #define _DERIVED_CLASS_H__

        /**
         * This is an example file used for sampling code layout.
         *
         * @author Firstname Surname
         */

        #include <cstddef>
        #include <iostream>
        #include <list>
        #include <string>

        #include "BaseClass.h"
        #include "Stuff.h"

        /**
         * The DerivedClass is completely useless. It represents uselessness in all its
         * entirety.
         */
        class DerivedClass : public BaseClass
        {
            ////////////////////////////////////////////////////////////
            // CONSTRUCTORS / DESTRUCTORS
            ////////////////////////////////////////////////////////////

            public:

            /**
             * Constructs a useless object with default settings.
             *
             * @param value
             *        Is never used.
             * @throws Exception
             *         If something goes awry.
             */
            DerivedClass (const int value)
                : uselessSize_ (0)
            {}

            /**
             * Constructs a copy of a given useless object.
             *
             * @param object
             *        Object to copy.
             * @throws OutOfMemoryException
             *         If necessary data cannot be allocated.
             */
            DerivedClass (const DerivedClass& object)
            {}

            /**
             * Destroys this useless object.
             */
            ~DerivedClass () {}

            ////////////////////////////////////////////////////////////
            // PUBLIC METHODS
            ////////////////////////////////////////////////////////////

            public:

            /**
             * Clones a given useless object.
             *
             * @param object
             *        Object to copy.
             * @return This useless object.
             */
            DerivedClass& operator= (const DerivedClass& object)
            {
                stuff_ = object.stuff_;
                uselessSize_ = object.uselessSize_;
                return *this;
            }

            /**
             * Does absolutely nothing.
             *
             * @param useless
             *        Pointer to useless data.
             */
            void doNothing (const int* useless)
            {
                if (useless == NULL)
                {
                    return;
                }
                else
                {
                    int womba = *useless;
                    switch (womba)
                    {
                        case 0:
                            std::cout << "This is output 0";
                            break;
                        case 1:
                            std::cout << "This is output 1";
                            break;
                        case 2:
                            std::cout << "This is output 2";
                            break;
                        default:
                            std::cout << "This is default output";
                            break;
                    }
                }
            }

            /**
             * Does even less.
             */
            void doEvenLess ()
            {
                int mySecret = getSecret ();
                int gather = 0;
                for (int i = 0; i < mySecret; i++)
                {
                    gather += 2;
                }
            }

            ////////////////////////////////////////////////////////////
            // PRIVATE METHODS
            ////////////////////////////////////////////////////////////

            private:

            /**
             * Gets the secret value of this useless object.
             *
             * @return A secret value.
             */
            int getSecret () const
            {
                if ((RANDOM == 42) && (stuff_.size() > 0) || (1000000000000000000 > 0) && true)
                {
                    return 420;
                }
                else if (RANDOM == -1)
                {
                    return ((5 * 2) + (4 - 1)) / 2;
                }
                int timer = 100;
                bool stopThisMadness = false;
                while (!stopThisMadness)
                {
                    do
                    {
                        timer--;
                    } while (timer > 0);
                    stopThisMadness = true;
                }
                return timer;
            }

            ////////////////////////////////////////////////////////////
            // FIELDS
            ////////////////////////////////////////////////////////////

            private:

            /**
             * Don't know what this is used for.
             */
            static const int RANDOM = 42;

            /**
             * List of lists of stuff.
             */
            std::list <Stuff> stuff_;

            /**
             * Specifies the size of this object's uselessness.
             */
            size_t uselessSize_;
        };

        #endif

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()

                Dim fireAgain As Boolean

                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Dts.TaskResult = Dts.Results.Success

            End Sub

        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()

                Dim fireAgain As Boolean

                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections

                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Next

                Dts.TaskResult = Dts.Results.Success

            End Sub

        End Class

    Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to control the logging by choosing which events to log in the normal way.

    Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only, and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won’t find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task’s custom ExecuteSQLExecutingQuery event. To quote the help reference: Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses.

    It is the last part that is so useful - how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being returned? You need to turn it on; by default no custom log events are captured, but I’ll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.
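    If you prefer to inspect packages outside of execution altogether, the same connection information can be read from a saved package file. The PowerShell sketch below is only an illustration (the assembly load call and the package path are assumptions; adjust them for your own installation), and bear in mind it shows the design-time connection string, before any configurations or expressions have been applied - which is exactly why the runtime logging above is still worth having.

        # Load the SSIS runtime assembly (the exact name/version depends on your SQL Server install)
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ManagedDTS") | Out-Null

        # Hypothetical package path - point this at one of your own packages
        $packagePath = "C:\SSIS\MyPackage.dtsx"

        $application = New-Object Microsoft.SqlServer.Dts.Runtime.Application
        $package = $application.LoadPackage($packagePath, $null)

        # List every connection manager and its design-time connection string
        foreach ($connection in $package.Connections)
        {
            "{0} = {1}" -f $connection.Name, $connection.ConnectionString
        }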

    Read the article

  • Yammer, Berkeley DB, and the 3rd Platform

    - by Eric Jensen
    If you read the news, you know that the latest high-profile social media acquisition was just confirmed. Microsoft has agreed to acquire Yammer for $1.2 billion. Personally, I believe that Yammer’s amazing success can be mainly attributed to their wise decision to use Berkeley DB Java Edition as their backend data store. :-) I’m only kidding, of course. However, as Ryan Kennedy points out in the video I recently blogged about, BDB JE did provide the right feature set that allowed them to reliably grow their business, which in turn allowed them to focus on their core value add. As it turns out, their ‘add’ is quite valuable! This actually makes sense to me, a lot more sense than certain other recent social acquisitions, and here’s why. Last year, IDC declared that we are entering a new computing era, the era of the “3rd Platform.” In case you’re curious, the first 2 were terminal computing and client/server computing, IIRC. Anyway, this 3rd one is more complicated. This year, IDC refined the concept further. It now involves 4 distinct buzzwords: cloud, social, mobile, and big data. Yammer is a social media platform that runs in the cloud, designed to be used from mobile devices. Their approach, using Berkeley DB Java Edition with High Availability, qualifies as big data. This means that Yammer is sitting right smack in the center of IDC’s new computing era. Another way to put it is: the folks at Yammer were prescient enough to predict where things were headed, and get there first. They chose Berkeley DB to handle their data. Maybe you should too!

    Read the article

  • Impressions of my ASUS eee slate EP121 - Dual core 4GB, 64GB SSD

    - by tonyrogerson
    This thing is lovely. It has a very nice Bluetooth keyboard with nice feedback on the keypress; there is no mouse, but you can use the stylus or get yourself a Bluetooth mouse. Me, I’ve opted for a Microsoft ARC mouse, which is a delight to use. The USB doors are a pain to open for the first time if, like me, you don’t have any finger nails. It came as a surprise that the slate shows four processors – Dual Core with multi-threading. I didn’t really look at the processor; I was more interested in the amount of memory and the SSD. You don’t get the full 4GB even with the 64 bit version of Windows 7 installed (which I immediately upgraded to Ultimate through my MSDN subscription). The box is extremely responsive – extremely – it loads Winword in literally a second. I’ve got Office 2010 and OneNote 2010 on there now. One problem: on applying all (43) Windows updates since the upgrade, the machine was still sat on step 3 of 3 of the configuring-updates start-up screen after about an hour. You can’t turn this machine off without using a paper clip to reset it, and as I have just found, you need a paper clip :). Installing Windows 7 SP1 was effortless. One of the first things I did on it was to reduce the size of the font; by default it’s set at 125%, and my eye sight is ok :) so I’ve set that back down. Amazon Kindle for the PC works really well, with plenty of text on the screen when viewed portrait. The case it comes with also allows the slate to stand up in various positions – portrait, horizontal – and seems stable enough. The wireless works well and seems to have a better signal than my other two laptop machines, which is good news. The gadget passed the pose test at work :). I use offline files to keep a copy of all my work stuff locally. I’m not sure what it is – well, it’s probably my server – but whenever I try and sync, it runs for a couple of minutes then fails with "network name no longer contactable"; funnily enough it’s fine from my big laptop, so I can only guess this may be a driver type issue on the EP121 itself – very odd and very annoying. I do a lot of presenting and need to plug into a VGA projector because at most sites that’s all that is offered. The EP121 has a mini-HDMI output, which is great except for this scenario: HDMI is digital, VGA is analogue, and you will struggle to find a cost effective solution. I found HDFury and also a device HP do; however, a better solution appears to be getting a USB graphics adapter, for instance the one I’ve ordered, the ClimaxDigital USB 2.0 to DVI, VGA or HDMI Adaptor, which gives everything I need – VGA and DVI output and great resolution as well. OK, so fingers crossed, because I’m presenting next Wednesday in Edinburgh and not taking my 300kg Lenovo W700 (I’m sure my back just sighed in relief) – it certainly works really well on my LED TV, and the install was simple – it just works!
    One of the several reasons for buying this piece of kit was to use it on my LED TV to remote into my main machine to check stuff whilst sat in my living room, and also to watch webcasts and lecture videos in comfort away from my office. Because of the wireless speed and limitation, I’m opting for a USB network adapter from Belkin – that will also allow me to take advantage of my home gigabit network. There are only 2 USB ports on the slate, so I’m going to knock up a hub so connecting it in is straightforward and simple. I’m also going to purchase a second power supply so I don’t have to faff about with that either. I now have the developer x64 edition of SQL Server 2008 R2 – yes, everything :) – with about 16GB left to play with on the machine now, but that will be fine; I’ll put AdventureWorks on there so I can play and demo stuff, which is all I’m after from this – my development machine is significantly more powerful and meets my storage needs too. Travel test this weekend and next week; I’m in Dundee for my final exam for the masters degree.

    Read the article

  • Performing an upgrade from TFS 2008 to TFS 2010

    - by Enrique Lima
    I recently had to go through the process of migrating a TFS 2008 SP1 environment to TFS 2010. I will go into the details of the tasks that I went through, but first I want to explain why I define it as a migration and not an upgrade.

    When this environment was set up, based on support and limitations for TFS 2008, we used a 32 bit platform for the TFS Application Tier and Build Servers. The Data Tier, since we were installing SP1 for TFS 2008, was done as a 64 bit installation. We knew at that point that TFS 2010 was in the picture, so that served as further motivation to make that a 64 bit install of SQL Server. The SQL Server at that point was a single instance (Default) installation too. We had a pretty good strategy in place for backups of the databases supporting the environment (and this made the migration so much smoother), so we were pretty familiar with the databases and the purpose they serve.

    I am sure many of you that have gone through a TFS 2008 installation have encountered challenges and trials, and likely even more so if you, like me, needed to configure your deployment for SSL. So, frankly, I was a little concerned about the process of migrating. They say practice makes perfect, and this environment I worked on is in some way my brain child, so I was not ready nor willing for this to be a failure or something that would impact my client’s work.

    Prior to going through the migration process, we did the install of the new environment. The Data Tier was the same, with a new Named Instance in place to host the 2010 install. The Application Tier was in place too, and we did the DefaultCollection configuration to test and validate that all components were in place as they should be.

    Anyway, on to the tasks for the migration (thanks to Martin Hinselwood for his very thorough documentation):

    1. Close access to TFS 2008; you want to make sure all code is checked in and ready to go. We stated a difference of 8 hours between code lock and the start of migration to give time for any unexpected delay. How do we close access? Stop IIS.
    2. Back up your databases. Which ones?
       - TfsActivityLogging
       - TfsBuild
       - TfsIntegration
       - TfsVersionControl
       - TfsWorkItemTracking
       - TfsWorkItemTrackingAttachments
    3. Restore the databases to the new Named Instance (make sure you keep the same names).
    4. Now comes the fun part! The actual import/migration of the databases. A couple of things happen here: the TfsIntegration database will be scanned, and the other databases will be checked to validate that they exist. Those databases will go through a process where data is extracted and transferred to the TfsVersionControl database, which is then renamed to Tfs_<Collection>. You will be using a tool called tfsconfig and the option import. This tool is located in the TFS 2010 installation path (C:\Program Files\Microsoft Team Foundation Server 2010\Tools), and the command to use is as follows:

           tfsconfig import /sqlinstance:<instance> /collectionName:<name> /confirmed

       Where <instance> is the SQL Server instance where you restored the databases to, and <name> is the name you will give the collection. And to explain /confirmed - well, this means you have done a backup of the databases. Why? Remember you are going to merge the databases you restored when you execute the tfsconfig import command.

    The process will go through about 200 tasks; once it completes, go to the Team Foundation Server Administration Console and validate your imported databases and their contents.
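    To make the backup and import steps concrete, here is a rough PowerShell sketch of how they could be scripted. It is only an illustration: the instance names, backup path and collection name are placeholders, Invoke-Sqlcmd assumes the SQL Server PowerShell module is available, and the restore step is left as a comment.

        # Placeholder names - replace with your own data tier instances and paths
        $sourceInstance = "DATATIER"            # default instance hosting the TFS 2008 databases
        $targetInstance = "DATATIER\TFS2010"    # named instance created for TFS 2010
        $backupPath     = "\\backupserver\TfsBackups"

        $databases = "TfsActivityLogging", "TfsBuild", "TfsIntegration",
                     "TfsVersionControl", "TfsWorkItemTracking", "TfsWorkItemTrackingAttachments"

        # Back up each TFS 2008 database
        foreach ($db in $databases)
        {
            Invoke-Sqlcmd -ServerInstance $sourceInstance -Query `
                "BACKUP DATABASE [$db] TO DISK = N'$backupPath\$db.bak' WITH INIT"
        }

        # (Restore each .bak to the named instance, keeping the same database names, then run the import.)
        & "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\tfsconfig.exe" `
            import /sqlinstance:$targetInstance /collectionName:MyCollection /confirmed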
We’ll keep this manageable, so the next post is about how to complete that implementation with the SSL configuration.

    Read the article

  • Kill all the project files!

    - by jamiet
    Like many folks I’m a keen podcast listener, and yesterday my commute was filled by listening to Scott Hunter being interviewed on .NET Rocks about the next version of ASP.NET. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.NET project file (i.e. the .csproj file) is going away. The rationale being that the main purpose of that file is to list all the other files in the project, and that’s something that the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments): A file that lists files is really redundant when the OS already does this. Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file; however, there will no longer be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary, rather than the current approach where the project file is a list of inclusions. On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is: if it’s in the directory, it’s part of the project. Simple. Now I’m not an ASP.NET developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below, the project has Miscellaneous and Connection Managers folders, but no such folders exist on the file system. This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work and, quite frankly, it makes me feel like a second class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would; that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file? In the case of SSDT, the .sqlproj actually does a whole lot more than simply list files, because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…), but I see no reason why the convention over configuration approach can’t help us there either. Want to know which is the Post-deployment script? Well, it’s the one called Post-DeploymentScript.sql! Simple! So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms, root domains. Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots. In the LDoms terminology, this feature is called "Direct IO" or DIO. It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain. Let's look again at the hardware available to mars in the original configuration:

        root@sun:~# ldm ls-io
        NAME                         TYPE   BUS     DOMAIN   STATUS
        ----                         ----   ---     ------   ------
        pci_0                        BUS    pci_0   primary
        pci_1                        BUS    pci_1   primary
        pci_2                        BUS    pci_2   primary
        pci_3                        BUS    pci_3   primary
        /SYS/MB/PCIE1                PCIE   pci_0   primary  EMP
        /SYS/MB/SASHBA0              PCIE   pci_0   primary  OCC
        /SYS/MB/NET0                 PCIE   pci_0   primary  OCC
        /SYS/MB/PCIE5                PCIE   pci_1   primary  EMP
        /SYS/MB/PCIE6                PCIE   pci_1   primary  EMP
        /SYS/MB/PCIE7                PCIE   pci_1   primary  EMP
        /SYS/MB/PCIE2                PCIE   pci_2   primary  EMP
        /SYS/MB/PCIE3                PCIE   pci_2   primary  OCC
        /SYS/MB/PCIE4                PCIE   pci_2   primary  EMP
        /SYS/MB/PCIE8                PCIE   pci_3   primary  EMP
        /SYS/MB/SASHBA1              PCIE   pci_3   primary  OCC
        /SYS/MB/NET2                 PCIE   pci_3   primary  OCC
        /SYS/MB/NET0/IOVNET.PF0      PF     pci_0   primary
        /SYS/MB/NET0/IOVNET.PF1      PF     pci_0   primary
        /SYS/MB/NET2/IOVNET.PF0      PF     pci_3   primary
        /SYS/MB/NET2/IOVNET.PF1      PF     pci_3   primary

    All of the "PCIE" type devices are available for SDIO, with a few limitations. If the device is a slot, the card in that slot must support the DIO feature. The documentation lists all such cards. Moving a slot to a different domain works just like moving a PCI root complex. Again, this is not a dynamic process and includes reboots of the affected domains. The resulting configuration is nicely shown in a diagram in the Admin Guide. There are several important things to note and consider here:

    - The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
    - Solaris will create nodes for this hardware under /devices. This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device. It is very important to understand that all of this PCIe infrastructure is virtual only! Only the actual endpoint devices are true physical hardware.
    - There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.
    - The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
    - This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain), it will reset and thus disrupt the operation of the endpoint device owned by the guest domain. The result in the guest is not predictable. I recommend configuring the resulting behaviour of the guest using domain dependencies, as described in the Admin Guide in the chapter "Configuring Domain Dependencies".

    Please consult the Admin Guide in the section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details! As you can see, there are several restrictions for this feature. It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices. Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations).

    This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next section of this blog series.

    Read the article

  • SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28

    - by pinaldave
    Backup is the most important task for any database admin. Your data is at risk if you are not performing database backups. Honestly, I have seen many DBAs who know how to take backups but do not know how to restore them. (Sigh!) In this blog post we are going to discuss one of my real experiences with one of my clients – BACKUPIO. When I started to deal with it, I really had no idea how to fix the issue. However, after fixing it at two places, I think I know why this is happening, but at the same time, I am not sure the fix is the best solution. The reality is that the fix is not a solution but a workaround (which is not optimal, but gets your things done).

    From Book On-Line:

    BACKUPIO - Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount.

    BACKUPBUFFER - Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount.

    BACKUPIO and BACKUPBUFFER Explanation: These wait stats will occur when you are taking a backup to tape or any other extremely slow backup system.

    Reducing BACKUPIO and BACKUPBUFFER waits: In my recent consultancy, backup on tape was very slow, probably because the tape system was very old. When I explained the reason for this wait type during the consultancy, the owners immediately decided to replace the tape drive with an alternate system. They had a small SAN enclosure on the side that was not being used, which they decided to re-purpose. After a week, I received an email from their DBA saying that the wait stats had reduced drastically.

    At another location, my client was using a third party tool (please don’t ask me the name of the tool) to take backups. This tool was compressing the backup while taking it. I have had a very good experience with this tool almost all the time, except for this one case. When I tried to take a backup using the native SQL Server compressed backup, there was a very small value on this wait type and the backup was much faster. However, when I attempted it with the third party backup tool, this value was very high again and the backup was taking much more time. The third party tool had many other features, but the client was not using those features. We ended up using the native SQL Server compressed backup and it worked very well. If I see this higher in a future consultancy, I will try to understand this wait type in much more detail, and then probably I will be able to come to some solid solution.

    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
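    To see how much time your own server has spent on these two wait types, you can query sys.dm_os_wait_stats. The PowerShell sketch below is just an illustration - the instance name is a placeholder and Invoke-Sqlcmd assumes the SQL Server PowerShell snap-in is loaded - and you could equally run the same T-SQL from Management Studio.

        # Placeholder instance name - point this at the server you want to inspect
        $instance = "localhost"

        # Cumulative wait statistics for the backup-related wait types since the last restart
        $query = "SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms " +
                 "FROM sys.dm_os_wait_stats " +
                 "WHERE wait_type IN (N'BACKUPIO', N'BACKUPBUFFER') " +
                 "ORDER BY wait_time_ms DESC;"

        Invoke-Sqlcmd -ServerInstance $instance -Query $query | Format-Table -AutoSize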

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn’t want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite.  Today’s post could fall into the category of, “Hey, as long as you have that fancy tool…”  Unfortunately, it also falls into the category of an overloaded worker taking someone else’s word for the truth, not verifying it with independent fact-checking, and then making decisions based on that.  Here’s my story… I’m not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it).  I came to this company five years ago as a lead application developer with extensive experience in database design and development.  I worked my way into management, and along the way, took over the DBA responsibilities.  Fortunately, our systems run pretty smoothly most of the time, but I’m always looking for ways to make them better and to fit into my understanding of best practices.  When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances.  Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter.  Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks.  He stated that they did, and I thought that all was well. Then one day, I’m poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section.  The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic.  But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS.  I figured that I must be reading it wrong so I scanned the Help file, but that just seemed to confirm my interpretation.  Then I thought, “there must be something wrong with the demo version of the software!  This can’t be right!”  But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth! I was stunned!  I quickly went through the grieving process…denial…anger…reconciliation.  Here was something that I thought was such a basic truth that was turned upside down.  OK, granted, this wasn’t disastrous.  Our databases didn’t suddenly grind to a halt.  I didn’t get calls late at night inquiring about the sudden downturn in performance.  But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify! Yes, before someone else points it out, I know that there are “free” disk management tools built-in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that, but the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but you actually never verified?
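    In the spirit of Trust, but Verify, here is one way to make that check yourself with nothing but the built-in tools. This PowerShell sketch is only an illustration (it uses the standard WMI association class; on newer systems Get-Disk and Get-Partition give the same answer more directly): it maps each drive letter back to the physical disk and partition it actually lives on, so you can see whether your data, log, and OS volumes really are on separate arrays.

        # Each association ties a drive letter to a partition named like "Disk #1, Partition #0",
        # which tells you the physical disk number the volume actually lives on.
        Get-WmiObject Win32_LogicalDiskToPartition | ForEach-Object {
            $drive     = $_.Dependent  -replace '.*DeviceID="([^"]+)".*', '$1'
            $partition = $_.Antecedent -replace '.*DeviceID="([^"]+)".*', '$1'
            New-Object PSObject -Property @{ DriveLetter = $drive; Partition = $partition }
        } | Sort-Object Partition | Format-Table DriveLetter, Partition -AutoSize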

    Read the article

  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I’d like to thank Packt for providing me with a review version of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about; knowing that “best practices” is a real misnomer, I wanted to see what they were. I really like the fact that he starts the book by saying they are not really best practices but actually recommended practices. As a Team Foundation Server user I found that chapter 2 was more for the open source crowd, and I really skimmed it. The portion on branching was well documented - although I’m not a fan of the testing branch myself, the rest was right on. The section on the merge remote changes (bring the outside to you) paradigm is really important and was touched on. Chapter 3 has good solid practices on low level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture and data based architecture. DTOs and ORMs are touched on briefly, as is NoSQL. Chapter 5 is about deployment and is really a great primer on all the “packaging” technologies, like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce, and WiX, the major player outside of commercial solutions. There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years due to the fact that VS 2012 doesn’t support VSSD. In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD and Continuous Testing. Peter covers all those concepts really nicely, albeit succinctly. Being a book on recommended practices, I find this is really good. I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio “experience”. Tips on organizing projects were good. Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don’t know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally he touches on Continuous Integration, a very important concept in today’s ALM landscape. Chapter 8 is all about parallelization: threads, async, division of labor, reactive extensions. All those concepts are touched on, and again generalized approaches to those modern problems are given. Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like scalability, messaging and cloud (the flavor of the month for distributed apps, although I think this will stick ;-)). He also looks at protocols (TCP/UDP) and how to debug distributed apps. He touches on logging and health monitoring. Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, which goes into all sorts of goodness like how to host in IIS or self-host, how to manually test WCF services, and also a section on authentication and authorization. ASP.NET Web Services are also touched on in that chapter. All in all, a good read with nice tips and accepted practices. I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of the current technology flavors to explain the concepts.
    Cheers, ET

    Read the article

  • Create and Track Your Own License Keys with PowerShell

    - by BuckWoody
    SQL Server used to have a cool little tool that would let you track your licenses. Microsoft didn’t use it to limit your system or anything; it was just a place on the server where you could record that this system used this license key. I miss those days – we don’t track that any more, and I want to make sure I’m up to date on my licensing, so I made my own. Now, there are a LOT of ways you could do this. You could add an extended property in SQL Server, add a table to a tracking database, use a text file, track it somewhere else, whatever. This is just the route I chose; if you want to use some other method, feel free. Just sharing here.

    Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. And this is REALLY important. I include a disclaimer at the end of my scripts, but in this case you’re modifying your registry, and that could be EXTREMELY dangerous – only do this on a test server – and I’m just showing you how I did mine. It isn’t an endorsement or anything like that, and this is a “Buck Woody” thing, NOT a Microsoft thing. See this link first, and then you can read on. OK, here’s my script:

        # Track your own licenses

        # Write a New Key to be the License Location
        mkdir HKCU:\SOFTWARE\Buck

        # Write the variables - one sets the type, the other sets the number, and the last one holds the key
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseType" -value "Processor"

        # Notice the DWord value here - this one is a number so it needs that. Keep this on one line!
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseNumber" -propertytype DWord -value 4

        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseKey" -value "ABCD1234"

        # Read them all
        $LicenseKey = Get-Item HKCU:\Software\Buck
        $Licenses = Get-ItemProperty $LicenseKey.PSPath
        foreach ($License in $LicenseKey.Property)
        {
            $License + "=" + $Licenses.$License
        }

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
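    As a quick follow-up, once the values are in place you can pull just the license properties back out, or remove the tracking key entirely when you are done experimenting. Again, this is only a sketch against the same hypothetical HKCU:\SOFTWARE\Buck location used above.

        # Show only the license-related values under the tracking key
        Get-ItemProperty HKCU:\SOFTWARE\Buck |
            Select-Object SQLServerLicenseType, SQLServerLicenseNumber, SQLServerLicenseKey

        # Clean up the test key when you no longer need it
        Remove-Item HKCU:\SOFTWARE\Buck -Recurse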

    Read the article

  • Windows 8 Camp&ndash;Ways to Prepare

    - by Lori Lalonde
    When Windows 8 was announced at the BUILD conference back in September, it created quite a buzz among the developer community. By the spring of 2012,  Windows 8 Developer Camps started popping up everywhere imaginable. I received a lot of questions from CTTDNUG members about whether or not we would be hosting one locally. If you recall my post about the Windows Phone/Azure Developer Workshop that CTTDNUG hosted back in March, you’ll remember that the biggest hurdle to overcome when planning this type of event was finding the right venue. It took some time, but I finally found a venue that was available and provided the prerequisites needed to ensure this camp is a success. I am very excited that CTTDNUG will be hosting a Windows 8 Camp this summer in the Kitchener/Waterloo area. In fact, it’s coming up in less than 2 weeks. Clearly other developers are excited as well, because our registration numbers show that the event is already 70% full! On top of that, I was fortunate enough to also book two well-known evangelists to present and teach at this full day developer camp: Andrei Marukovich and Atley Hunter. This was the icing on the cake. With the content provided by Microsoft, and two local experts that live and breathe Windows 8 development, I know that I, along with other developers that attend this event, will have the opportunity to maximize our learning potential and hit the ground running. If you plan on attending a Windows 8 Developer Camp soon, and want to ensure you get the most “bang for your buck” (figuratively speaking, since these camps are free), there are some things you can do to prepare before the big day: 1) Install the prerequisites on your own device before the big day I can’t stress this enough. Otherwise, you will be spending valuable time during the hands-on period downloading and installing what is needed, rather than digging into the development and using that time to ask the experts on-hand about programming challenges, issues, questions you may have with respect to your development. Prerequisites: Windows 8 Release Preview Visual Studio 2012 RC Download the Windows 8 SDK Samples 2) Purchase, download, and read Charles Petzold’s newest book:  Programming Windows 6th Edition This is a great introduction to the type of content you will be learning about during the camp. Doing some light reading beforehand might raise some questions about the concepts discussed in the book, which will give you the opportunity to write them down and bring them with you to the camp. The experts on hand will be able to answer them for you. 3) Make use of the freebies that are available Telerik has recently released a preview of their RadControls for Metro. You can sign up to receive a license code to give you access to install the preview for free and start playing around with it. Syncfusion also offers a free download of their Metro Studio package, which is a collection of metro style icons that you can customize and use in your own applications. Last but not least, once you’ve installed the Windows 8 Release Preview on your own device, go to the Windows 8 Store and download a handful of the free apps that are available. Testing out other Metro apps may give you ideas of what you can do in your own apps and analyze what features you like: application flow, type of animations used, concepts that were leveraged, how live tiles were used, etc. I hope you found these tips to be useful as you embark on a new development journey! 
Although this post focused on how to prepare for a Windows 8 camp, the same ideas apply to whichever developer camp/workshop/event you attend. Learning does not begin and end on the day of the event. Attending a developer camp is just one step of many to master whatever technology you are interested in. It is a continuous process, which is fully maximized when you do your homework beforehand, actively participate on the day, and follow up by putting what you learned into practice afterwards. Happy coding!

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (& one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end-user to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow. To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we get to avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs. Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering how we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting. 
Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machine. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.
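    The tool described above handles this for .NET, but the underlying idea - catch unhandled exceptions, bundle up a stack trace plus some local context, and send it somewhere the developers can query it - is easy to sketch in any language. Below is a minimal, hypothetical illustration in Python (it is not SmartAssembly's API; the report file name and field names are invented), which writes each report to a local file as a stand-in for uploading it to a collection server.

        import json
        import platform
        import sys
        import traceback
        from datetime import datetime, timezone

        REPORT_FILE = "error_reports.jsonl"  # hypothetical stand-in for an upload endpoint

        def report_unhandled(exc_type, exc_value, exc_tb):
            """Collect roughly the detail an automated error report needs, then fall through to the default hook."""
            # Walk to the innermost frame so we can capture its local variables,
            # similar in spirit to the "values of all the local variables" mentioned above.
            tb = exc_tb
            while tb is not None and tb.tb_next is not None:
                tb = tb.tb_next
            frame_locals = tb.tb_frame.f_locals if tb else {}

            report = {
                "when": datetime.now(timezone.utc).isoformat(),
                "exception": exc_type.__name__,
                "message": str(exc_value),
                "stack_trace": traceback.format_exception(exc_type, exc_value, exc_tb),
                "locals_of_failing_frame": {name: repr(value) for name, value in frame_locals.items()},
                "platform": platform.platform(),
                "python": platform.python_version(),
            }
            with open(REPORT_FILE, "a", encoding="utf-8") as f:
                f.write(json.dumps(report) + "\n")

            # Still show the normal traceback (a real product would show a friendly dialog instead).
            sys.__excepthook__(exc_type, exc_value, exc_tb)

        sys.excepthook = report_unhandled

        if __name__ == "__main__":
            orders = {"widgets": 3}
            print(orders["gadgets"])  # deliberately raises KeyError to produce a report

    Running it appends a JSON report containing the exception type, the formatted stack trace, and the local variables of the failing frame - roughly the information that, per the post, lets a bug be fixed without a to-and-fro conversation with the user.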

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using Python and MongoDB to deal with the inconsistency in the data. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, marred only by one or two errant sections; which section is the inconsistent one, however, varies from period to period. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version, a section_two_variants might have three-field and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
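    For what it's worth, the "slightly more elegant" approach described above - pre-defined schemas for each section variant plus a function that maps a file number to the variant in force for that range of files - can be sketched quite compactly. The following is a hypothetical Python illustration using openpyxl and pymongo; the section names, cell positions, and file-number ranges are invented for the example, not taken from the real spreadsheets.

        from openpyxl import load_workbook
        from pymongo import MongoClient

        # Hypothetical variant layouts: each maps field name -> (row, column) in the worksheet.
        SECTION_FOUR_VARIANTS = {
            "normal":       {"voltage": (20, 2), "current": (21, 2), "temperature": (22, 2)},
            "shifted_down": {"voltage": (21, 2), "current": (22, 2), "temperature": (23, 2)},
        }

        # Which variant applies to which range of file numbers (inclusive).
        SECTION_FOUR_RANGES = [
            (0, 6999, "normal"),
            (7000, 7635, "shifted_down"),
            (7636, 9999, "normal"),
        ]

        def variant_for(file_number, ranges):
            """Pick the schema variant that covers this file number."""
            for low, high, name in ranges:
                if low <= file_number <= high:
                    return name
            raise ValueError(f"no variant defined for file {file_number}")

        def extract_section_four(sheet, file_number):
            """Read section four using whichever layout this file happens to use."""
            layout = SECTION_FOUR_VARIANTS[variant_for(file_number, SECTION_FOUR_RANGES)]
            return {field: sheet.cell(row=r, column=c).value for field, (r, c) in layout.items()}

        def load_file(path, file_number, collection):
            sheet = load_workbook(path, data_only=True).active
            doc = {
                "file_number": file_number,
                "section_four": extract_section_four(sheet, file_number),
                # ...repeat for the other sections, each with its own variants/ranges tables...
            }
            collection.insert_one(doc)  # MongoDB tolerates documents whose shape varies per file

        if __name__ == "__main__":
            client = MongoClient("mongodb://localhost:27017")
            load_file("test_results_7100.xlsx", 7100, client.products.test_results)

    The attraction of this shape is that a newly discovered variant costs one extra dictionary entry and one extra range row, rather than another full schema, and the range tables double as documentation of how the template drifted over time.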

    Read the article

  • Mind Reading with the Raspberry Pi

    - by speakjava
    At JavaOne in San Francisco I did a session entitled "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi". As part of this I showed some demonstrations of things I'd done using Java on the Raspberry Pi. This is the first part of a series of blog entries that will cover all the different aspects of these demonstrations. A while ago I had bought a MindWave headset from Neurosky. I was particularly interested to see how this worked as I had had the opportunity to visit Neurosky several years ago when they were still developing this technology. At that time the 'headset' consisted of a headband (very much in the Bjorn Borg style) with a sensor attached and some wiring that clearly wasn't quite production ready. The commercial version is very simple and easy to use: there are two sensors, one which rests on the skin of your forehead, the other is a small clip that attaches to your earlobe. Typical EEG setups used in hospitals require lots of sensors and they all need copious amounts of conductive gel to ensure the electrical signals are picked up. Part of Neurosky's innovation is the development of this simple dry-sensor technology. Having put on the sensor and turned it on (it runs off a single AAA battery), it collects data and transmits it to a USB dongle plugged into a PC, or in my case a Raspberry Pi. From a hacking perspective the USB dongle is ideal because it does not require any special drivers for any complex, low level USB communication. Instead it appears as a simple serial device, which on the Raspberry Pi is accessed as /dev/ttyUSB0. Neurosky have published details of the command protocol. In addition, the MindSet protocol document, including sample code for parsing the data from the headset, can be found here. To get everything working on the Raspberry Pi using Java the first thing was to get serial communications going. Back in the dim distant past there was the Java Comm API. Sadly this has grown a bit dusty over the years, but there is a more modern open source project that provides compatible and enhanced functionality, RXTXComm. This can be installed easily on the Pi using sudo apt-get install librxtx-java. Next I wrote a library that would send commands to the MindWave headset via the serial port dongle and read back data being sent from the headset. The design is pretty simple: I used an event-based system so that code using the library could register listeners for different types of events from the headset. You can download a complete NetBeans project for this here. This includes javadoc API documentation that should make it obvious how to use it (incidentally, this will work on platforms other than Linux. I've tested it on Windows without any issues, just by changing the device name to something like COM4). To test this I wrote a simple application that would connect to the headset and then print the attention and meditation values as they were received from the headset. Again, you can download the NetBeans project for that here. Oracle recently released a developer preview of JavaFX on ARM which will run on the Raspberry Pi. I thought it would be cool to write a graphical front end for the MindWave data that could take advantage of the built-in charts of JavaFX. Yet another NetBeans project is available here. Screenshots of the app, which uses a very nice dial from the JFxtras project, are shown below. 
I probably should add labels for the EEG data so the user knows which trace is low alpha, which is mid gamma, and so on. Given that I'm not a neurologist, I suspect it won't increase my understanding of what the (rather random looking) traces mean. In the next blog I'll explain how I connected a LEGO motor to the GPIO pins on the Raspberry Pi and then used my mind to control the motor!
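    The library described above is Java on top of RXTXComm, but the serial protocol itself is language-neutral, so here is a rough sketch of the same packet-reading idea in Python with pyserial. Treat the details as assumptions to check against Neurosky's published protocol document: the 9600 baud rate, the 0xAA 0xAA sync bytes, the checksum rule, and the attention/meditation codes are my reading of the MindSet spec, not something taken from the article.

        import serial  # pip install pyserial

        PORT = "/dev/ttyUSB0"  # the MindWave dongle as it appears on the Raspberry Pi
        BAUD = 9600            # assumed rate for the standard ThinkGear stream

        # Single-byte data codes (assumed from the MindSet protocol document).
        POOR_SIGNAL, ATTENTION, MEDITATION = 0x02, 0x04, 0x05

        def read_packet(port):
            """Block until a complete, checksummed packet arrives and return its payload bytes."""
            while True:
                # Packets are framed by two 0xAA sync bytes.
                if port.read(1) != b"\xaa" or port.read(1) != b"\xaa":
                    continue
                length = port.read(1)[0]
                if length > 169:             # not a valid payload length; resynchronise
                    continue
                payload = port.read(length)
                checksum = port.read(1)[0]
                if (~sum(payload)) & 0xFF == checksum:
                    return payload

        def parse(payload):
            """Pull the single-byte values (signal quality, attention, meditation) out of a payload."""
            values, i = {}, 0
            while i < len(payload):
                code = payload[i]
                if code >= 0x80:             # multi-byte row: next byte gives its length
                    i += 2 + payload[i + 1]  # skip raw/EEG-power rows in this simple sketch
                else:
                    values[code] = payload[i + 1]
                    i += 2
            return values

        if __name__ == "__main__":
            with serial.Serial(PORT, BAUD) as port:
                while True:
                    data = parse(read_packet(port))
                    if ATTENTION in data or MEDITATION in data:
                        print("attention:", data.get(ATTENTION), "meditation:", data.get(MEDITATION))

    An event-based wrapper like the Java library described in the post would simply sit on top of a loop like this, firing listeners whenever a parsed packet contains the values a caller has registered interest in.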

    Read the article

  • Today's Links (6/30/2011)

    - by Bob Rhubart
    James Gosling Says He Doesn't Care About Java But here's the rest of the story: "What I really care about is the Java Virtual Machine as a concept," says Gosling, "because that is the thing that ties it all together; it's the thing that makes Java the language possible; it's the thing that makes things work on all kinds of different platforms; and it makes all kinds of languages able to coexist." Virtual Developer Day: SOA Accelerate Your Development with Oracle SOA Suite. Learn how in this FREE on-line workshop with hands-on labs, July 12th, 9 am to 1:30 PM PST. Podcast: Toronto Architect Day Panel Discussion Part 3 (of 4) is now available, in which the panel (including Oracle ACE Director Cary Millsap and InfoQ editor and co-founder Floyd Marinescu) discusses public vs private cloud as the best strategy for small businesses and start-ups. WebLogic Weekly for June 27th, 2011 | James Bayer Bayer shares the latest resources for those with WebLogic on the brain. Griffiths Waite at Oracle Open World | Mark Simpson Oracle ACE Director Mark Simpson shares information on the presentations he's scheduled to give at Oracle OpenWorld San Francisco 2011. Kscope Solid Service Bus Implementations Peter Paul van de Beek's Kscope11 presentation "is aimed at supporting architects and especially developers to choose the right integration infrastructure for a job." Migration To Java EE 6 With Spring 3 - ...Could Become "Interesting" | Adam Bien "Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database," says David Dorf. "The term 'noSQL' is often used in this context as well." Book Review: "Designing With the Mind In Mind" | Abhinav Agarwal According to Abhinav Agarwal, Jeff Johnson's new book is about "the theory of how the mind perceives information, of how humans understand what they read, and how our eyes are attuned to paying attention to not just what's happening in front of us but also at the periphery of our vision." BPM 11g Advanced Workshop | Martien van den Akker Martien van den Akker shares his thoughts on both the workshop he recently attended and on the Oracle BPM 11g product. Fusion Applications - What You Need To Know: Product Families | Floyd Teter "Fusion Applications are organized into seven groups of related products called Product Families," observes Oracle ACE Director Floyd Teter. "While the product features are organized according to the Business Process Model and can cross the boundaries of product families, the product family groupings are an easy way to wrap your mind around Fusion Apps." Grid Control: Refreshing Weblogic Domains | Dave Best Dave Best shares tips for avoiding problems when using grid control to centrally manage/monitor your environment. Webcast: Oracle to Announce Datanomic Integration Plans The combination of Datanomic technology and the previous acquisition of Silver Creek Systems will deliver a complete, integrated and best-of-breed solution for Data Quality. Learn about Oracle's strategy and product plans and how the new products acquired from Datanomic will impact your organization. July 19, 2011, 8:00am PT / 11:00am ET. Speakers include Michael Weingartner (Vice President, Product Development, Oracle), Martin Boyd (Senior Director, Product Strategy, Oracle), and Dain Hansen (Director, Product Marketing, Fusion Middleware, Oracle).

    Read the article

  • Is there a good [and modern] reason to not have static HTML pages with AJAX content, rather than generate pages?

    - by user1725
    Assumptions: we don't care about IE6 or NoScript users. Let's pretend we have the following design concept: all your pages are HTML/CSS that create the aesthetics, layout, colours, and general design-related things. Let's pretend this basic code below is that: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <link href="/example.css" rel="stylesheet" type="text/css"/> <script src="example.js" type="text/javascript"></script> </head> <body> <div class="left"> </div> <div class="mid"> </div> <div class="right"> </div> </body> </html> Which, in theory, with the right CSS, should produce three vertical columns on the web page. Now, here's the root of the question: what are the serious advantages and/or disadvantages of loading the content of these columns (let's assume they are all indeed dynamic content, not static) via AJAX requests, versus having the content pre-rendered by a server-side scripting language? So for instance, in the AJAX example (let's assume jQuery is used on load), we would have: //Multiple HTTP requests $("body > div.left").load("./script.php?content=news"); $("body > div.right").load("./script.php?content=blogs"); $("body > div.mid").load("./script.php?content=links"); OR --- //Single HTTP request $.ajax({ url: './script.php?content=news|blogs|links', type: 'GET', dataType: 'json', success: function (data) { $("body > div.left").html(data.news); $("body > div.right").html(data.blogs); $("body > div.mid").html(data.links); } }); Versus doing this: <body> <div class="left"> <?php echo function_returning_news(); ?> </div> <div class="mid"> <?php echo function_returning_blogs(); ?> </div> <div class="right"> <?php echo function_returning_links(); ?> </div> </body> I'm personally thinking right now that the static-HTML-plus-AJAX approach is the better method; my reasoning is: I've separated my data, logic, and presentation (i.e., "MVC") code. I can make changes to one without affecting the others. Browser caches mean I'm mostly getting server load for the content, not the presentation wrapped around it. I could turn my "script.php" into a more robust API for the website. But I'm not certain that these are legitimately good reasons, and I'm not confidently aware of other issues that could arise, so I would like to know the pros and cons, so to speak.

    Read the article

  • Dropped impressions 25 days after restructure

    - by Hamid
    Our website is a non-English property website (moshaver.com), similar to rightmove.co.uk. In September 2012 our website was adversely affected by Panda, causing our incoming clicks from Google to drop from around 3,000 to less than a thousand. We were hoping that Google would eventually realize that we are not a spam website and that things would get better. However, by August 2013 we were almost sure we needed to do something, so we started to restructure our web content. We used the canonical tag to point our search result pages to our listing pages, and the noindex tag on listing pages which do not have any properties at the moment. We also changed title tags to more friendly ones, among other changes. Our changes went live on 10th August. As shown in the graph taken from the Google Analytics Search Engine Optimization section, these changes resulted in an increase in the number of times Google displayed our results in its search results: our impressions almost doubled starting 15th August. However, as the graph shows, our CTR dropped from around 15% to 8% from that date. This might have been because of our changed title tags (so people were less likely to click on them), or it might be normal for increased impressions. This situation continued until 10th September, when our impressions decreased dramatically to less than a thousand. This is almost 30% of our original impressions (before the website restructure) and 15% of the new impressions. At the same time our CTR increased dramatically to around 50%. I have two theories for this increase. The first is that these statistics are less accurate at lower impression counts. The second is that Google is now only displaying our results for queries directly related to our website (our name, our URL), and not for general terms, such as "apartments in a specific city". The second theory would also explain the dramatic decrease in impressions. After digging into the analytics data a little more, I constructed the following table. It displays the breakdown of our impressions, clicks and CTR across different Google products (web and image) and in total. What I understand from this table is that most of our increased impressions after the restructure were in image search. I don't think image search users would be looking for the content on our website. Furthermore, it shows that the drop in our web search CTR is not as dramatic as the drop in our overall CTR (-30% compared to -60%). I thought posting it here might help you understand the situation better. Is it possible that Google tested our new structure for 25 days, and then decided to decrease our impressions because of the new low CTR? Or should we look for another factor? If this is the case, how long does it usually take for Google to give us another chance? It has been one month since our impressions dropped.

    Read the article

  • Join our Marketing Intelligence Team in Dublin!

    - by jessica.ebbelaar
    Do you want to work with the brightest minds in the industry? Want to be part of a global team that's changing the way the world does business? Then Oracle is the place for YOU. Join now as a Marketing Intelligence Representative. You will have the opportunity to develop within the role through working alongside the Business Development, Sales and Marketing teams within Oracle. The Marketing Intelligence Group is viewed as a true talent pool for the Business Development and Sales Teams. Oracle offers a structured training programme for Marketing Intelligence Representatives and Business Development Consultants, including our approved sales certified training methodology along with regular product training. Miriam started her career as a Marketing Intelligence Representative six years ago, and shares what she has learned and how her career is progressing. My Career Path at Oracle: June 2005 – October 2005: Profiler in the Marketing Intelligence Team. November 2005 - October 2006: Team Leader for MIT. November 2006 - February 2008: Business Development Consultant Iberia. March 2008 - December 2010: Lead Management Specialist. Currently: Sales Program Manager for Iberia & Benelux. What did you learn from your role in the Marketing Intelligence Team? Being a Profiler helped me to understand how an organisation works, from beginning to end. It is like being in university, but being paid! The three key things I learnt in this role are: Knowledge of customers: You are on the phone with over 70 customers daily. Not only does this give you an overview of the IT infrastructure of the customers' companies, but also how to manage their questions and rejections. Essentially you are learning how to convert their pain and complaints into business opportunities. Knowledge of Oracle: As a Profiler you get an excellent overview of how Oracle works internally, from Marketing to Sales, without forgetting the Operations Team. Knowledge about yourself: As a Profiler I learnt how to work outside of my comfort zone; there is a new challenge almost every day, but Oracle are there to support you every step of the way. Oracle really invests in developing the MIT team, and as a Profiler you can expect product and sales training on a monthly basis. How did you progress from MIT to the Business Development Group (BDG)? I made sure that my manager knew from the very beginning that I was keen to progress at Oracle, and I was set very clear objectives to help me reach my goal. My manager was very supportive and ensured I received all the training I needed. After I became a Team Leader of Profiling, I moved to an Iberia BDG position. How do you feel your experience in MI has helped you in your current role? I truly believe that the MI position gives you a great overview of Oracle, and this has really helped me in my current position. I am the Sales Program Manager for Iberia & Benelux, and in my campaigns I need to target the right companies and the right job specs. My time in the Marketing Intelligence team really helped me to understand how to focus and target my campaigns, so I know I don't miss any business opportunities! How would you sum up your Oracle experience? Oracle is a big organisation with big opportunities. With the right skills and with the great training programs that Oracle offer, the only limit is you! If you have any questions related to this article feel free to contact [email protected] You can find all our job opportunities via http://campus.oracle.com. 
Technorati Tags: Marketing Intelligence,Benelux,Iberia,Profiler,Business Development,Sales Representatives,BDG,Business Development Group,opportunities,Oracle

    Read the article

  • Content Challenge: You Can Only Get it Here

    - by Mike Stiles
    Part of the content conundrum for brands is figuring out what kind of content customers would find cool, desirable, and relevant. The mere fact many brands have no idea what this content might be is, in itself, pretty alarming. You’d have to have a pretty thorough lack of involvement with and understanding of your customers to not know what they might like. But despite what should be a great awakening in which consumers are using every technology and trick in the book to shield themselves from ads and commercials, brand self-obsession continues as marketers concentrate on their message, their campaign, what they want to say, and what they want social users to do. When individuals conduct themselves in that same fashion on Facebook and Twitter, it gets tiresome and starts losing value pretty quickly. Their posts eventually get hidden. Conversely, friends who post things that consistently entertain or inform, with little self-marketing desperation involved, win the coveted “show all updates” setting. Of course brands are going to use social to market. It’s pretty much the point of having social in the marketing mix. And yes, people who follow a brand’s Twitter account or “Like” a brand’s Facebook Page implicitly state they want to know what’s going on with that brand’s products and services. But if you have a Facebook friend that assumes you want every one of her posts to be about what wine she likes (Mitsubishi’s current campaign is even based around weeding out pretentious Facebook friends, then running them over), then you know how it must feel for your fans and followers to get a sales pitch for your crackers or whatever you’re selling every single time. Is there such a thing as content that doesn’t sell but that still advances the brand and makes the consumer more involved and valuable? Of course. And perhaps there are no better companies than enterprise brands to do it. Enterprise organizations are large enough to go beyond a product and engage readers/viewers at higher, broader levels…communicating expertise across entire sectors, subjects and industries. You’re going from pitchman to news source, and getting full credit for it as the presenter. A recent GigaOM article pointed out the success a San Francisco-based startup called Crunchyroll is having. Their niche (and they proudly admit it’s a niche) is providing Japanese anime, Korean drama and Asian live action content to countries that can’t get it any other way via licensing deals. Shows are available in HD and on the same day they air in the host country. Crunchyroll not only gets 8 million viewers a month, they have 100,000 paying subscribers at $7-12/month. Got a point, Mike? I do happen to have one. Crunchyroll illustrates the content opportunity enterprise companies have…which is to determine your “area,” the interest graph of your customers, then provide content that speaks to and satisfies those interests that can’t be found anywhere else. At least not in the same style, or of the same quality, or with the same authority. Do what no one else is doing. Provide what no one else is providing in your sector. If underserved users are willing to pay monthly for access to awkwardly moving cartoon dragons, imagine the audience you could attract with free, useful, non-sales content in your customers’ area of interest. It’s an audience you’ll want in place when the time does come to put out that marketing message. A content challenge is better than a content conundrum any day.

    Read the article

  • Be the surgeon

    - by Rob Farley
    It’s a phrase I use often, especially when teaching, and I wish I had realised the concept years earlier. (And of course, fits with this month’s T-SQL Tuesday topic, hosted by Argenis Fernandez) When I’m sick enough to go to the doctor, I see a GP. I used to typically see the same guy, but he’s moved on now. However, when he has been able to roughly identify the area of the problem, I get referred to a specialist, sometimes a surgeon. Being a surgeon requires a refined set of skills. It’s why they often don’t like to be called “Doctor”, and prefer the traditional “Mister” (the history is that the doctor used to make the diagnosis, and then hand the patient over to the person who didn’t have a doctorate, but rather was an expert cutter, typically from a background in butchering). But if you ask the surgeon about the pain you have in your leg sometimes, you’ll get told to ask your GP. It’s not that your surgeon isn’t interested – they just don’t know the answer. IT is the same now. That wasn’t something that I really understood when I got out of university. I knew there was a lot to know about IT – I’d just done an honours degree in it. But I also knew that I’d done well in just about all my subjects, and felt like I had a handle on everything. I got into developing, and still felt that having a good level of understanding about every aspect of IT was a good thing. This got me through for the first six or seven years of my career. But then I started to realise that I couldn’t compete. I’d moved into management, and was spending my days running projects, rather than writing code. The kids were getting older. I’d had a bad back injury (ask anyone with chronic pain how it affects  your ability to concentrate, retain information, etc). But most of all, IT was getting larger. I knew kids without lives who knew more than I did. And I felt like I could easily identify people who were better than me in whatever area I could think of. Except writing queries (this was before I discovered technical communities, and people like Paul White and Dave Ballantyne). And so I figured I’d specialise. I wish I’d done it years earlier. Now, I can tell you plenty of people who are better than me at any area you can pick. But there are also more people who might consider listing me in some of their lists too. If I’d stayed the GP, I’d be stuck in management, and finding that there were better managers than me too. If you’re reading this, SQL could well be your thing. But it might not be either. Your thing might not even be in IT. Find out, and then see if you can be a world-beater at it. But it gets even better, because you can find other people to complement the things that you’re not so good at. My company, LobsterPot Solutions, has six people in it at the moment. I’ve hand-picked those six people, along with the one who quit. The great thing about it is that I’ve been able to pick people who don’t necessarily specialise in the same way as me. I don’t write their T-SQL for them – generally they’re good enough at that themselves. But I’m on-hand if needed. Consider Roger Noble, for example. He’s doing stuff in HTML5 and jQuery that I could never dream of doing to create an amazing HTML5 version of PivotViewer. Or Ashley Sewell, a guy who does project management far better than I do. I could go on. My team is brilliant, and I love them to bits. We’re all surgeons, and when we work together, I like to think we’re pretty good! @rob_farley

    Read the article
