Search Results

Search found 46009 results on 1841 pages for 'first timer'.


  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
    Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) use VS2010, and, well, I’m on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews!

    I’m going to begin from the beginning. This is my first foray into running something on Azure, so it’s been a bit of a learning curve, but luckily the Neo4J guys have got us started. Let’s download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip OK, the other thing we’ll need is the VS2012 Azure SDK, so let’s get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install).

    Now, unzip the VS2010 solution and let’s open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln One-way upgrade? Yer! Ignore the migration report – we don’t care! Let’s build that sucker… Ahhh, 14 errors: ‘WindowsAzure does not exist in the namespace Microsoft’. Not a problem, right? We’ve installed the SDK; we just need to update the references.

    We can ignore the Test projects – they don’t use Azure. For each of the other projects, expand the References node, hunt out the yellow exclamation marks, and delete them! You’ll need to add the right ones back in (listed below); when you go to the ‘Add Reference’ dialog, make sure you have ‘Assemblies’ and ‘Framework’ selected before you search (and search for ‘microsoft.win’ to narrow it down). The references you need for each project are:

    CollectDiagnosticsData:
    Microsoft.WindowsAzure.Diagnostics
    Microsoft.WindowsAzure.StorageClient

    Diversify.WindowsAzure.ServiceRuntime:
    Microsoft.WindowsAzure.CloudDrive
    Microsoft.WindowsAzure.ServiceRuntime
    Microsoft.WindowsAzure.StorageClient

    Right, so let’s build again… Sweet! No errors.

    Now we need to set up our blobs. I’m assuming you are using the most up-to-date Java you happen to have downloaded :) – in my case that’s JRE7, which is located in: C:\Program Files (x86)\Java\jre7 Zip up that folder into whatever you want to call it (I went with jre7.zip) and stick it in a temp folder for now. Into that same temp folder I also copied the Neo4J zip I was using: neo4j-community-1.7.2-windows.zip

    OK, now we need to get these into our blob storage, and this is where a lot of stuff becomes unstuck. I didn’t find any applications that helped me use the blob storage: one would crash (because my internet speed is so slow) and the other just didn’t work – sure, it looked like it had worked, but when push came to shove it didn’t. So this is how I got my files into blob storage (local first):

    1. Run the ‘Storage Emulator’ (just search for that in the Start menu).

    2. That takes a little while to start up, so fire up another instance of Visual Studio in the meantime and create a new Console Application.
    3. Manage NuGet Packages for that solution and add ‘Windows Azure Storage’.

    Now you’re set up to add the code:

        using System;
        using System.IO;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static void Main()
        {
            CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount;
            CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient();
            client.Timeout = TimeSpan.FromMinutes(30);

            CloudBlobContainer container = client.GetContainerReference("neo4j");
            container.CreateIfNotExist(); // make sure the container actually exists before uploading

            UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip");
            UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip");
        }

        private static void UploadBlob(CloudBlobContainer container, string blobName, string filename)
        {
            CloudBlob blob = container.GetBlobReference(blobName);

            using (FileStream fileStream = File.OpenRead(filename))
                blob.UploadFromStream(fileStream);
        }

    This will upload the files to your local storage account. (To switch to an Azure one, you’ll need to create a storage account and use those credentials when you create your CloudStorageAccount above.) To test that they’re uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded.

    Now that those files are there, we are ready for some final configuration. Right-click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project and click on the ‘Settings’ tab – we’ll need to make some changes. By default, the 1.7.2 edition of Neo4J unzips to: neo4j-community-1.7.2 So we need to update all the ‘neo4j-1.3.M02’ directories to be ‘neo4j-community-1.7.2’, and we also need to update the Java runtime location. I also changed the Endpoints settings to be HTTP (from TCP) and to have a port of 7410 (mainly because that’s straight down on the numpad).

    The last ‘gotcha’ is some hard-coded consts, which had me looking for ages. They are in the ‘ConfigSettings’ class of the ‘Neo4jServerHost’ project, and the ones we’re interested in are: Neo4jFileName and JavaZipFileName. Change those both to what they should be.

    OK, nearly there (I promise)! Run the ‘Compute Emulator’ (same deal with the Start menu). In your system tray you should have an Azure icon; when the compute emulator is up and running, right-click on the icon and select ‘Show Compute Emulator UI’.

    The last steps! Make sure the ‘Neo4j.Azure.Server’ cloud project is set up as the start project and let’s hit F5… tension mounts, the build takes place (you need to accept the UAC warning) and VS does its stuff. If you look at the Compute Emulator UI you’ll see some log output (which you’ll need if this goes awry – but it won’t, don’t worry!). In a bit, the console and a Java window will pop up; then the console will bog off, leaving just the Java one. If we switch back to the Compute Emulator UI and scroll up, we should be able to see a line telling us the port number we’ve been assigned (in my case 7411). (If you can’t see it, don’t worry: press CTRL+A on the emulator, then CTRL+C, paste the text into something like Notepad, and do a Find for ‘port’ – you’ll soon see it.) Go to your favourite browser and head to: http://localhost:YOURPORT/ and you should see the WebAdmin!

    See you on the cloud side hopefully!

    Chris

    PS Other gotchas!
    OK, I’ve been caught out a couple of times:

    1. I had an instance of Neo4J running as a service on my machine; the Azure instance wanted to run the https version of the server on the same port the service was running on, and so Java would complain that the port was already in use.

    2. The first time I converted the project, it didn’t update the version of the Azure library to load in the App.config of the Neo4jServerHost project, and VS would throw an exception saying it couldn’t find the Azure dll version 1.0.0.0.
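    PPS One more note on the upload code above: switching from the storage emulator to a real Azure storage account only changes how you build the CloudStorageAccount. A minimal sketch (the account name and key here are made up – substitute your own):

        // Hypothetical credentials – replace with your real storage account name and key.
        CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<your-key>");

    Everything else in the uploader stays the same.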

    Read the article

  • Oracle Customer Experience Summit @ OpenWorld

    - by Christie Flanagan
    This first-ever Oracle Customer Experience Summit @ OpenWorld kicked off yesterday, bringing together established thought leaders and practitioners in customer experience. The first day saw noted marketing and customer experience thought leader Seth Godin take the stage to discuss how rapidly accelerating change and adoption are driving new behaviors and higher expectations in a massively disruptive transformation in which the customer now holds the power. His presentation gave us in-depth insight into this always-connected, always-sharing experience revolution we are witnessing.

    If you haven’t yet made it over to the Oracle Customer Experience Summit at The Westin St. Francis and the recently made-over Oracle Square (aka Union Square), there’s still time today and tomorrow to network with industry peers and hear best practices from those who have steered their ventures through the disruptive trends of customer experience and have proven, successful strategies to share for driving strategic customer-centric initiatives. If you’re interested in learning how Oracle WebCenter helps businesses meet the demands of the customer experience revolution, be sure to check out these sessions at the Oracle Customer Experience Summit later today:

    Using the Online Customer Experience to Drive Engagement and Marketing Success
    Thursday, Oct 4, 4:15 PM - 5:15 PM - St. Francis - Georgian
    Mariam Tariq - Senior Director Product Management, Oracle
    Stephen Schleifer - Senior Principal Product Manager, Oracle
    Richard Backx - Business IT Architect/Consultant, KPN NL Netco CE Channels Online

    The online channel is a critical means of reaching and engaging customers. Online marketing efforts today must be targeted, interactive, and consistent to provide customers with a seamless experience. These efforts must include integrated management of Web, mobile, and social channels—supported by cross-channel customer data and campaigns—and integration with commerce to drive an engaging and differentiated online customer experience. Attend this session to learn how you can use the online channel to increase customer loyalty and drive the success of your marketing initiatives.

    Empowering Your Frontline Employees: Sales and Service Enterprise Collaboration
    Thursday, Oct 4, 5:30 PM - 6:30 PM - St. Francis - Elizabethan AB
    Stephen Fioretti - VP, Product Management, Oracle
    Peter Doolan - Group Vice President, Sales Engineering, Oracle
    Andrew Kershaw - Sr Director Business Development, Oracle
    Marty Marcinczyk - VP Customer Experience Engineering, Comcast

    A focus on the employee experience is critical, because it can make or break your customers’ experiences, directly or indirectly. Engaged and empowered frontline employees become your best advocates and inspire your brand champions. This session explores proven approaches and tools, including social collaboration tools, that can help you empower and enable your frontline teams to improve customer and employee experiences.

    And before you go, you'll also want to explore the Innovation Tents in Oracle Square, which feature leading-edge customer experience demonstrations; attend our customer journey mapping workshop; and learn at sessions focused on innovating differentiated experiences that drive cross-functional alignment.

    Read the article

  • Profiling Startup Of VS2012 – SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made by a company called Ipcas. A single professional license costs 449€+VAT. For the test I used SpeedTrace 4.5, which is currently Beta. Although it is cheaper than dotTrace, it has by far the most options to influence how profiling works.

    First you need to create a tracing project, which configures tracing for one process type. You can start the application directly from the profiler or (much more interesting) have it attach to a specific process when it is started. For this you need to check the “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button, and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start, you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next few started processes. As you can see, there are many more tabs which allow you to influence tracing in a much more sophisticated way.

    SpeedTrace is the only profiler here which does not rely entirely on the profiling API of .NET. Instead it modifies the IL code (instrumentation on the fly) to write tracing information to disc, which can later be analyzed. This approach is not only very fast but gives you unprecedented analysis capabilities.

    Once the traces are collected, they show up in your workspace, where you can open the trace viewer. I skip the other windows because this view is by far the most useful one. You can sort the methods not only by Wall Clock time but also by CPU consumption and wait time, which none of the other products support in their views at the same time. If you want to optimize for CPU consumption, sort by CPU time. If you want to find out where most time is spent, you need Clock Total time and Clock Waiting. There you can directly see whether a method took long because it was waiting on something or because it really executed stuff that took that long.

    Once you have found a method you want to drill deeper into, you can double-click on it to get to the Caller/Callee view, which is similar to the JetBrains Method Grid view. But this time you see much more: in the middle is the clicked method, above are the methods that call it, and below are the methods that it directly calls. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut: you can press the magic button to calculate the aggregation of all called methods. This is displayed in the lower left window, where you can see each method call and how long it took. There you can also sort, to see whether this call stack contains only methods not worth optimizing (e.g. WCF connect calls which you cannot make faster). YourKit has a similar feature, where it is called Callees List. In the Functions tab the context menu also offers many other useful analysis options.

    One really outstanding feature is the View Call History Drilldown. When you select it you get not a sum of all method invocations but a list with the duration of each individual method call. This is not surprising, since SpeedTrace uses tracing to get its timings. There you get many useful graphs showing how this method behaved over time. Did it become slower at some point in time, or was only the first call slow? The diagrams and the list will tell you that.

    That is all fine, but what should I do when one method call was slow? I want to see where it was coming from. No problem: select the method in the list, hit F10, and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today serializers are used everywhere. You want to find out where the 5s XmlSerializer.Deserialize call came from? Hit F10 and you get the call stack which invoked the 5s Deserialize call.

    The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption happened. Click in the graph to get the Thread Stacks window, where you can get a quick overview of what all threads were doing at this time. This looks like the Stack Traces feature in YourKit, only this time you get the last called method first, which helps to quickly see what all threads were executing at this moment. YourKit generates a rather long list, which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that, but you see which methods the profiler found executing code most often, which is a good indication of the methods consuming most CPU time.

    Does this sound too good to be true? I have not told you the best part yet: the best thing about this profiler is the staff behind it. When I see a crash or some other odd behavior, I send a mail to Ipcas and usually get a mail the next day saying that the problem has been fixed, with a download link to the new version. The guys at Ipcas are even so helpful as to log in to your machine via a Citrix client to help you get started profiling the actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working to get 4.5 out of the door. But still, the support is by far the best I have encountered so far.

    The only downside is that you should instrument your assemblies, including the .NET Framework, to get the most accurate numbers. You can profile without doing so, but then you will see very high JIT times in your process, which can severely affect the correctness of the measured timings. If you do not care about exact numbers, you can also enable logging of method arguments of primitive types, in the main UI in the Data Trace tab. If you need to know what files were opened at which times by your application, you can find it out without a debugger. Since SpeedTrace reads huge trace files in its reader, you should perhaps use a 64-bit machine to be able to analyze bigger traces. The memory consumption of the trace reader is too high for my taste, but they did promise to come up with something much improved in the next version.

    Read the article

  • Finally, upgrade from Nokia X3 to Samsung Galaxy S III

    This time, something slightly different but nonetheless not less interesting, hopefully. Living on a remote island like Mauritius, ill-praised 'Cyber Island' in the Indian Ocean, has its advantages in lifestyle and relaxed environment to live in, but in terms of technological aspects it can be quite a nightmare. Well, I guess this might be a different story to report about... one day.

    Cyber Island Mauritius

    Despite its shiny advertisement as Cyber Island and business ICT hub to Africa, Mauritius is not on the latest track of available models in computer hardware or, in the context of this article, cellulars or smart-phones, or communication technology in general. Okay, I have to admit that this statement is only partly true: money can buy, even here in Mauritius. Luckily, there are ways and ways to deal with this outcry of modern, read: technological, civilisation issues. Online shopping, you might think? Yes, for sure, until you discover in your checkout procedure that a small island in the Indian Ocean isn't a preferred destination for delivery, and the precious time you spent on putting your items into your cart and feeding your personal level of anticipation gets ruined on the last stint.

    Ordering from abroad saves you money

    Anyway, I got in touch with my personal courier, and luckily there were some extra kilos left in the luggage. First obstacle sorted: we have a Transporter! Okay, on the next occasion off to Amazon online, using their Prime service for fast delivery. Actually, the order was placed on Saturday evening and everything got delivered on Tuesday morning - nice job in less than 72 hours. Among the items of that shopping rush I ordered a shiny Samsung Galaxy S III 16GB in oceanic blue - did I mention that you hardly get a blue model in Mauritius? - for my BWE. Interesting side-notes: First, Amazon Germany dropped the prices on the S3 by roughly 30%, and we got the 16GB model for less than 500 Euro (or approx. Rs. 19.500,-) compared to the usual Rs. 27.000,- on the local market. It even varies whether the local price is inclusive or exclusive VAT (15%). Second, for a while she had been bothering me to get an iPhone and an iPad for her; fair enough, I thought - decent hardware, posh design and reliable services. Until we watched the 'magical' introduction of Samsung's new models at the IFA exhibition, she read the bashing comments on Google+ about the iPhone 5, and I gave her a brief summary of the law suit between Apple and Samsung in the USA. So, yes, Samsung USA is right: the next big thing is already here - literally. My BWE loves the look and touch of the Galaxy S3. And for me it was more cost-effective in terms of purchases done at the App Store, ups, Play Store.

    Transfer of contacts, text messages and media files

    Okay, now that the hardware is in place, how to transfer all those contacts, text messages, media files, etc. between those two devices? In the past, I used the Nokia Communication Suite between various models, but now for Android? Well, as usual Google and Bing are reliable friends, and among the first hits I came across an article about How to Transfer Contacts from Nokia to Android. Couldn't be easier, right? Well, sort of... my main Windows systems are already running on Windows 8, and this actually caused problems with the mobile/smart-phone device drivers. The article provides the download for an older version 1.10 which upgrades to 2.11 (as of the time of writing this entry), but neither could get the Galaxy S3 and the Nokia connected. Shame on me... the product page clearly doesn't mention Windows 8 (for now), and Windows 8 isn't available for the general audience at all... After I took a spare machine running on Windows Vista, everything went smooth: software installed, upgrade done, device drivers for Android automatically downloaded and installed, and the same painless routine for the Nokia part. I think I rebooted the system twice during the whole setup procedure, but hey, it was more or less a distraction while coding some stuff in ASP.NET MVC and Telerik Kendo UI. The transfer of contacts and text messages was done via Wondershare MobileGo for Android, and all media files by moving the additional microSD card from one device to the other. But even without an external SD card, it would have been very easy to copy the files via Windows Explorer directly.

    Little catch and excellent service

    Fine, we are almost done, and the only step left is to shift the SIM card... Ouch, gotcha! The X3 uses a standard-size SIM card while the S III only accepts the microSIM form factor. What an irony: bigger smartphone needs smaller SIM card. Luckily, the next showroom of Emtel is just 5 mins away up the road, and the service staff over there know their job. Finally, after roughly 10 mins of paper work, activation and small chit-chat, the S3 came to life on the mobile network. Owning a smart-phone now, and knowing that my BWE would like to interact more on social networks away from home, especially to upload pictures and provide local 'check-ins', I activated a data package for her in advance, too. Even though it was Saturday, everything was already done and ready to be used. Nice bonus: the Emtel clerk directly offered to set up the configuration for the Emtel data services for me - yes, sure, go ahead, this saves me searching for that in the settings. Okay, spoiler-alert here: setting a static APN to access the Emtel network and the internet wouldn't have been a challenge. But hey, she already had the phone in her hands and I could keep my eyes on the children. Well done, Emtel!

    Resume

    Thanks to the useful software package by Wondershare, it was a hands-free experience to transfer all the data from a Nokia mobile on Symbian S60 to a Samsung Galaxy S III on Android Ice Cream Sandwich (ICS). In the future, this won't be a serious issue at all anymore thanks to synchronisation services and cloud storage. And for now, I'm only waiting for the official upgrade to Jelly Bean.

    Read the article

  • 7 Tips For Strong On Page SEO

    On-page SEO is the first thing a webmaster should consider when planning the marketing of a website. Follow these simple yet effective methods to give your SEO campaign a kick-start.

    Read the article

  • Heating Up the Search Results With Local SEO

    With the help of an SEO agency and local SEO, local businesses are dominating the first page of search results. There are millions of websites in existence today, and with so much competition, website owners must have high rankings with the search engines in order to succeed online.

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview.

    About Azure Connect

    Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility in how you build your virtual network across various environments. So Azure Connect makes sense when you want to:

    Connect machines located in different cloud providers
    Connect on-premise machines running in different locations
    Connect Azure VMs with on-premise machines (if you do not have a VPN device, or if your device is not supported)
    Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or machines in other cloud providers

    The diagram below shows a high-level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself; the other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal.

    High Level Network Topology With Azure Connect

    Azure Connect Agent

    Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed. To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon at the top. Clicking on this icon will begin the download and installation process for the agent.

    Activating Roles for Azure Connect

    As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx.

    Firewall Rules

    Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6 (a scripted sketch of these rules follows at the end of this post). For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx.

    CA Certificates

    You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more.

    Groups

    To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups, allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows that I created a group called LocalGroup and assigned two machines (both on-premise) to that group.

    Groups and Computers in Azure Connect

    As I mentioned previously, you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication.

    Defining a Group in Azure Connect

    Testing the Connection

    Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the round-trip time is in the hundreds of milliseconds on average. That is to be expected, because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests.

    Sending a PING Request Through The Relay

    Conclusion

    As you can see, creating a network topology between machines using the Azure Connect service is simple.
    It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx.

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Special Thanks

    I would like to thank those that helped me figure out how Azure Connect works:
    Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/
    Michael Wood - http://www.mvwood.com
    Glenn Block - http://www.codebetter.com/glennblock
    Yves Goeleven - http://cloudshaper.wordpress.com/
    Sandrino Di Mattia - http://fabriccontroller.net/
    Mike Martin - http://techmike2kx.wordpress.com
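    A footnote on the Firewall Rules section above: the two openings can also be scripted instead of configured by hand. A minimal sketch using the built-in netsh tool (the rule names are mine; double-check the directions and ports against the MSDN page referenced above):

        netsh advfirewall firewall add rule name="Azure Connect Relay (TCP 443)" dir=out action=allow protocol=TCP remoteport=443
        netsh advfirewall firewall add rule name="Azure Connect (ICMPv6)" dir=in action=allow protocol=icmpv6:any,any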

    Read the article

  • How to Make Money Online Part 4 - Keyword Research is Critical

    Last time we looked at choosing the best products on ClickBank to promote; now we will look at keyword research and how critical it is to making money online. You may have heard the term keyword research if you have been looking around for a while, but if you are looking to make or earn money online for the first time, I will give a brief description of what keyword research is and why it is so important.

    Read the article

  • How is precedence determined in C pointers?

    - by ankur.trapasiya
    I've come across two pointer declarations that I'm having trouble understanding. My understanding of precedence rules goes something like this:

    Operator         Precedence   Associativity
    (), [ ]          1            Left to Right
    *, identifier    2            Right to Left
    Data type        3

    But even given this, I can't seem to figure out how to evaluate the following examples correctly.

    First example:

    float * (* (*ptr)(int))(double **, char c)

    My evaluation:

    *(ptr)
    (int)
    *(*ptr)(int)
    *(*(*ptr)(int))

    Then,

    double **
    char c

    Second example:

    unsigned **( * (*ptr) [5] ) (char const *, int *)

    *(ptr)
    [5]
    *(*ptr)[5]
    *(*(*ptr)[5])
    **(*(*ptr)[5])

    How should I read them?
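    One way to sanity-check a reading like this is to rebuild each declaration from typedefs, peeling off one layer at a time; the result is the same type by construction. A sketch (the typedef names are mine, not from the question):

        /* First example: ptr is a pointer to a function taking an int and
           returning a pointer to a function taking (double **, char) and
           returning float *. */
        typedef float *Inner(double **, char); /* innermost function type    */
        typedef Inner *Outer(int);             /* returns a pointer to Inner */
        Outer *ptr;                            /* same type as the original  */

        /* Second example: ptr2 is a pointer to an array of 5 pointers to
           functions taking (char const *, int *) and returning unsigned **. */
        typedef unsigned **Fn(char const *, int *);
        Fn *(*ptr2)[5];

    The cdecl utility (or cdecl.org) will produce the same English reading of both declarations.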

    Read the article

  • An XEvent a Day (29 of 31) – The Future – Looking at Database Startup in Denali

    - by Jonathan Kehayias
    As I have said previously in this series, one of my favorite aspects of Extended Events is that it allows you to look at what is going on under the covers in SQL Server, at a level that has never previously been possible. SQL Server Denali CTP1 includes a number of new Events that expand on the information we can learn about how SQL Server operates, and in today’s blog post we’ll use those Events to look at what happens when a database starts up inside SQL Server. First...(read more)

    Read the article

  • The Team Behind SQL Saturday 60 In Cleveland

    - by AllenMWhite
    Last July I asked the assembled group at the Ohio North SQL Server Users Group meeting if they'd be interested in putting on a SQL Saturday. Enthusiastically, they said yes! A great group of people came together and met, first monthly, then every other week, and finally every week, taking time from their families to do the things necessary to put together a SQL Saturday event here in Cleveland. Their work has been amazing and any of you attending our event will see what a great job they've all done....(read more)

    Read the article

  • Finding the Twins when Implementing Catmull-Clark subdivision using Half-Edge mesh [migrated]

    - by Ailurus
    Note: The description became a little longer than expected. Do you know a readable implementation of this algorithm using this mesh? Please let me know!

    I'm trying to implement Catmull-Clark subdivision using Matlab (because later on the results have to be compared with some other stuff already implemented in Matlab). The first try was with a Vertex-Face mesh; the algorithm works, but it is of course not very efficient (since you need neighbouring information for edges and faces). Therefore, I'm now using a Half-Edge mesh (info), see also the paper of Lutz Kettner. Wikipedia link to the idea behind Catmull-Clark SDV: Wiki.

    My problem lies in finding the Twin HalfEdges; I'm just not sure how to do this. Below I'm describing my thoughts on the implementation, trying to keep it concise.

    Half-Edge mesh (using indices to Vertices/HalfEdges/Faces):

    Vertex (x, y, z, Outgoing_HalfEdge)
    HalfEdge (HeadVertex (or TailVertex, which one should I use?), Next, Face, Twin)
    Face (HalfEdge)

    To keep it simple for now, assume that every face is a quadrilateral. The actual mesh is a list of Vertices, HalfEdges and Faces. The new mesh will consist of NewVertices, NewHalfEdges and NewFaces, like this (note: Number_... is the number of ...):

    NumberNewVertices: Number_Faces + Number_HalfEdges/2 + Number_Vertices
    NumberNewHalfEdges: 4 * 4 * Number_Faces
    NumberNewFaces: 4 * Number_Faces

    Catmull-Clark:

    Find the FacePoint (centroid) of each Face: just average the x, y, z values of its vertices, and save it as a NewVertex.

    Find the EdgePoint of each HalfEdge: to prevent duplicates (each HalfEdge has a Twin which would produce the same EdgePoint), only calculate EdgePoints for the HalfEdge with the lower index of the pair.

    Update the old Vertices.

    OK, now all the new Vertices are calculated (although their Outgoing_HalfEdge is still unknown). The next step is to save the new HalfEdges and Faces, and this is the part causing me problems!

    Loop through each old Face; there are 4 new Faces to be created (because of the quadrilateral assumption):

    First create the 4 new HalfEdges per new Face, starting with one from the FacePoint to an EdgePoint.
    Next, a new HalfEdge from the EdgePoint to an updated Vertex.
    Another new one from the updated Vertex to the next EdgePoint.
    Finally, the fourth new HalfEdge from that EdgePoint back to the FacePoint.

    The HeadVertex of each new HalfEdge is known, the Next HalfEdge too. The Face is also known (since it is the new face you're creating!). Only the Twin HalfEdge is unknown; how should I know this? By the way, while looping through the Vertices of the new Face, assign the Outgoing_HalfEdge to the Vertices. This is probably the place to find out which HalfEdge is the Twin. Finally, after the 4 new HalfEdges are created, save the Face with the index of the last newly created HalfEdge. I hope this is clear; if needed I can post my (obviously not-yet-finished) Matlab code.
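    For what it's worth, the usual trick for resolving Twins is to hash each directed edge by its (tail, head) vertex pair and then look up the opposite direction. A sketch of my own (not from the post), assuming the half-edges are available as an N-by-2 matrix of [tail head] vertex indices:

        % twins(i) = index of the half-edge running the other way, or 0 on a boundary.
        function twins = findTwins(halfEdges)
            n = size(halfEdges, 1);
            twins = zeros(n, 1);
            lookup = containers.Map('KeyType', 'char', 'ValueType', 'double');
            for i = 1:n
                % key each half-edge by its directed vertex pair
                lookup(sprintf('%d-%d', halfEdges(i,1), halfEdges(i,2))) = i;
            end
            for i = 1:n
                % the twin is the same pair, reversed
                key = sprintf('%d-%d', halfEdges(i,2), halfEdges(i,1));
                if isKey(lookup, key)
                    twins(i) = lookup(key);
                end
            end
        end

    This also suggests an answer to the HeadVertex-or-TailVertex question: storing either one works, as long as you can recover the other (the tail of a half-edge is the head of the previous half-edge around the face).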

    Read the article

  • Macbook (non-pro, late 2010) Power Management Issues relating to Nvidia-Current drivers

    - by gbvitaobscura
    I have tried many potential solutions but to no avail! (This is going to be a long post.)

    Essentially, when I first install Ubuntu (I have tested 11.10 through the 12.04 beta), I can change the brightness of the MacBook backlight. One of the first things I usually do once my machine finishes installing is enter "sudo apt-get install nvidia-current" into the terminal to get my Nvidia graphics card set up. Before I install nvidia-current, power management works flawlessly. After installing nvidia-current, my graphics driver works mostly properly (more on that later). When I press F1 or F2, notify-osd pops up and shows the icon for the backlight along with a meter which changes according to how much I press F1 or F2, but the backlight does not change brightness at all. Two supplementary facts: when I am running off of battery power, the backlight never dims; also, changing the backlight brightness through the terminal does not work (it recognizes the command but changes nothing).

    Things which did not work:
    1. editing xorg.conf
    2. python script/hack
    3. editing grub

    Things which did work:
    1. removing nvidia-current

    Final piece of information: although my graphics card works in every way but power management, it cannot be found through System Settings/Details (Graphics: Unknown). I'm using the NVIDIA GeForce 320M. The output of lspci is:

    00:00.0 Host bridge: NVIDIA Corporation MCP89 HOST Bridge (rev a1)
    00:00.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
    00:01.0 RAM memory: NVIDIA Corporation Device 0d6d (rev a1)
    00:01.1 RAM memory: NVIDIA Corporation Device 0d6e (rev a1)
    00:01.2 RAM memory: NVIDIA Corporation Device 0d6f (rev a1)
    00:01.3 RAM memory: NVIDIA Corporation Device 0d70 (rev a1)
    00:02.0 RAM memory: NVIDIA Corporation Device 0d71 (rev a1)
    00:02.1 RAM memory: NVIDIA Corporation Device 0d72 (rev a1)
    00:03.0 ISA bridge: NVIDIA Corporation MCP89 LPC Bridge (rev a2)
    00:03.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
    00:03.2 SMBus: NVIDIA Corporation MCP89 SMBus (rev a1)
    00:03.3 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
    00:03.4 Co-processor: NVIDIA Corporation MCP89 Co-Processor (rev a1)
    00:04.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1)
    00:04.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2)
    00:06.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1)
    00:06.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2)
    00:08.0 Audio device: NVIDIA Corporation MCP89 High Definition Audio (rev a2)
    00:09.0 Ethernet controller: NVIDIA Corporation MCP89 Ethernet (rev a1)
    00:0a.0 IDE interface: NVIDIA Corporation MCP89 SATA Controller (rev a2)
    00:0b.0 RAM memory: NVIDIA Corporation Device 0d75 (rev a1)
    00:15.0 PCI bridge: NVIDIA Corporation Device 0d9b (rev a1)
    00:17.0 PCI bridge: NVIDIA Corporation MCP89 PCI Express Bridge (rev a1)
    01:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01)
    02:00.0 VGA compatible controller: NVIDIA Corporation Device 08a0 (rev a2)

    There is a very similar bug on Launchpad, which I have posted below.
    https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers/+bug/764195

    Read the article

  • Ubuntu's Lucid Lynx: Ubuntu's Most Innovative

    Datamation: "Ubuntu's Lucid Lynx (Ubuntu 10.04) is still six weeks away from release. However, on the eve of the first beta release, the daily builds and news releases suggest that Lucid will be one of the most innovative versions of Ubuntu for several years."

    Read the article

  • Canonicalization issue regarding academic URL vs. blog URL

    - by user5395
    I'm sorry if what I am about to write is long-winded; I only wish to be clear.

    I am an academic in the scientific community. I maintain a web site for my research, teaching, and other professional activities. Until recently, the content for this site was hosted in a directory on my university department's own server. The address is of the typical form (universityname).edu/~(myusername).

    I decided that I wanted to use WordPress in order to host and manage my page. So I set up a WordPress.com blog and then replaced the index.html file in (universityname).edu/~(myusername) with a new one consisting of a single frame, containing the WordPress.com blog. Now when a user visits (universityname).edu/~(myusername), he or she sees the blog instead. This has been pretty nice because, even when the user clicks on links between pages or posts in the blog, the only thing showing up in the address bar of the browser is www.(universityname).edu/~(myusername), because the blog is constrained to a frame.

    However, the effect of this change on the search side of things has not been so kind to me. Before, when someone searched for my name in Google, the first result was always (universityname).edu/~(myusername). This is the most desirable outcome, for professional reasons. (Having my academic URL come up first suggests that I am an accredited professional, and not just some crank with a blog!) But now, Google seems to have canonicalized my web presence under the blog's WordPress.com address. It has completely forgotten about my academic URL and considers the WordPress.com address to be the best address representing me on the web. Unfortunately, WordPress.com doesn't support the canonical tag, so I can't tell the blog to advertise itself as my academic URL in the header. (It doesn't seem to help at all that I have used the WordPress.com dashboard to turn on no-indexing of the blog.)

    One obvious solution would be to use the departmental server to host my content again, and use a local installation of the WordPress platform. For reasons beyond my control, the platform will not be deployed on the departmental server at this time. Another solution would be to use shared hosting with WordPress.org support, because the WordPress.org platform does support the canonical tag (albeit via a plug-in). But this usually seems to require purchasing a domain name and other fees, and there is no guarantee that Google will listen to the canonical tag (it might use whatever domain name I end up with instead).

    Is there a way I can more cleverly integrate the WordPress.com blog into a page hosted on my department's server? Is there some PHP code I can write to retrieve the blog's contents in a way that Google won't treat as a link / "perceive" the blog? Please note: I am a PHP novice at best. I just feel there should be a simpler solution to all this, within the constraints of what I have described above. Thanks!

    Read the article

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question, I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills; that should explain the "missing cubes" in the image below.

    If I render my density map in normal "minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this:

    I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images on my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what I think it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values.

    I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue the accuracy level of the surface being rendered should increase, so in short "fewer 45-degree slopes, more rolling hills" type mesh output. However this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of a fix?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. OK, so firstly, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too... square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet from the image there are sharp falling edges even in the most central areas, where the gradient in the first image looks the most smooth. The data is "regenerated" each time I run this, so no 2 islands come out the same, and it's purely random, not based on noise; but still, how can it look so smooth in 1 image and so not smooth in the other?
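    For reference, the "move vertex along the edge" logic is the per-edge interpolation step of marching cubes: for each cube edge the surface crosses, the output vertex is placed at the fraction where the density passes the isolevel, not at the midpoint. A sketch (XNA-style Vector3 assumed; the names are mine):

        // v1 and v2 are the densities sampled at the edge's two corners p1 and p2.
        static Vector3 InterpolateEdge(float isoLevel, Vector3 p1, Vector3 p2, float v1, float v2)
        {
            if (Math.Abs(v2 - v1) < 1e-6f)
                return p1; // densities (nearly) equal: any point on the edge will do

            float t = (isoLevel - v1) / (v2 - v1); // fraction of the way from p1 to p2
            return p1 + t * (p2 - p1);
        }

    This also explains the boolean-vs-float observation: with only 0/1 densities, t collapses to the same constant on every crossed edge, so the vertices land in the same places as the non-interpolated version and the mesh stays blocky.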

    Read the article

  • what's the point of method overloading?

    - by David
    I am following a textbook in which I have just come across method overloading. It briefly described method overloading as: when the same method name is used with different parameters, it's called method overloading. From what I've learned so far in OOP, if I want different behaviors from an object via methods, I should use different method names that best indicate the behavior, so why should I bother with method overloading in the first place?
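    The answer usually given: overloading is for one logical behavior that callers may want to invoke with different inputs, not for different behaviors. A hypothetical sketch (all names are mine):

        public class Logger {
            public enum Level { INFO, WARN, ERROR }

            // Convenience form: same action, fewer arguments to supply.
            public void log(String message) {
                log(message, Level.INFO); // delegate to the full version
            }

            // Full form: one name, because it is one behavior.
            public void log(String message, Level level) {
                System.out.println("[" + level + "] " + message);
            }
        }

    Callers write logger.log("disk full") or logger.log("disk full", Logger.Level.ERROR) and read both as "log"; a different name would only make sense if the behavior itself changed.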

    Read the article

  • ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing off a box. Actually it is a sphere, but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test, reducing the sphere to a single ray. The test is done by projecting the ray onto all faces of the box and picking the one that is closest.

    However, because I'm using floating-point variables, I fear that the projected point onto the surface might be interpreted as being below it in the next iteration; also, I will later allow the sphere to move, which might make that scenario more likely. In addition, the bounce coefficient might be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but also backwards, to catch those cases. That is where I got into the problem shown in the figure: in the first iteration the first black arrow is calculated and we end up at a point on the surface of the box. In the second iteration the "back projection" hits the other surface, making the second black arrow bounce off the wrong surface. If there are several boxes close to each other, this has further consequences, making the sphere fall through them all.

    So my main question is how to handle possible floating-point inaccuracy when placing the sphere on the box surface so that it does not fall through. In writing this question I got the idea of having a threshold, accepting back projections only up to a certain amount, much smaller than the box but larger than the possible inaccuracy; this would only cause a "false" back projection when the sphere hits the box on an edge, which would appear natural anyway.

    To clarify my original approach: the arrows shown in the image are not only the path the sphere travels but also each represent a single time step in the simulation. In reality the time step is much smaller, about 0.05 of the box size. The path traveled is projected onto possible sides to avoid traveling past a thinner object at higher speeds. In normal situations the floating-point accuracy is not an issue, but there are two situations where I have the concern: when the new position at the end of the time step is located very close to the surface (very unlikely, though), and when using a bounce factor of 0 (here it happens every time the sphere hits a box). To add some loss of accuracy (the motivation for my concern): the sphere and box are in different coordinate systems, and thus the sphere location is transformed for every test. This last point is why I'm not willing to trust to luck that one floating-point value lying on top of the box will always be interpreted the same way. I did not know Voronoi regions by name, but looking at them I'm not sure how they would be used in the projection scenario I'm describing here.
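    The threshold idea from the paragraph above, written out as a sketch (every name here is an assumption, not from the question): measure how far the projected point sits behind the candidate face along its normal, and accept the back projection only when that distance is small enough to be floating-point noise rather than a genuine pass-through.

        // penetration: distance of the point behind the face, measured along the
        // face normal (positive = behind the surface).
        const float Epsilon = 1e-3f; // much smaller than the box, larger than the float error

        static bool AcceptBackProjection(float penetration)
        {
            return penetration >= 0f && penetration <= Epsilon;
        }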

    Read the article

  • The Fellowship of the Ringwraiths [Video]

    - by Asian Angel
    While we all know what happened during the events of the first LOTR movie for the Fellowship, there were some unanswered questions about the Ringwraiths and their activities. Here finally is your opportunity to see what really happened… Fellowship of the Ringwraiths [via Neatorama]

    Read the article

  • I still think Twitter is dead … but

    - by Randy Walker
    Twitter finally hit the mainstream about 8 months ago, but I've been asking for a couple of years now: without a real way for the company to earn money, what is the future fate of Twitter?

    On the personal side, where is the real value for the users? For the most part, Twitter has replaced most people's IM (instant messaging), at least in the technology circles I run in. It still has value for users as a communication tool, but I see it more as a fad. My prediction is that over the next 6 months we'll start seeing a usage drop (if we haven't already started to see it).

    On the business side, how does Twitter make money? It doesn't. If you use the text messaging capabilities, you see a few ads, but most smart phone and PC users won't ever see them. I still think Twitter has the best chance to make money by forcing the "collectors" to pay money. You know what I mean by "collector": those people that collect tons of followers or friends. If Twitter caps the number of followers and makes you pay to have more, would you? The normal Twitter user doesn't have that many followers, and this is where my title comes in … BUT

    The financial value for Twitter is really seen through businesses connecting with their customers. I've seen 3 effective ways this has been accomplished:

    1. Giving your customers a coupon or announcing a sale. My favorite is @amazonmp3; being a huge music lover, I get notified when they put music on sale. Various restaurants like @ruthschris_ARK will let their favorite customers know about certain specials. And @BluefinMemphis: I was traveling through Memphis once looking for a sushi restaurant when they offered 50% off if we mentioned we saw them on Twitter. It was their first attempt at trying to encourage customers in the door, and after talking with the management, it was a huge success.

    2. Giveaways. Several companies have started huge marketing campaigns, but my favorite (@namecheap) is watching companies post trivia questions, where the first person to respond wins a prize.

    3. Responding to customer complaints. I once posted a complaint about American Express (a company that I have slowly come to really dislike) and they actually had someone contact me to try and resolve the issue. I give them credit for paying attention, but still dislike them for their horrible credit practices.

    Read the article

  • Make Firefox Show Google Results for Default Address Bar Searches

    - by The Geek
    Have you ever typed something incorrectly into the Firefox address bar and then been taken to a page you weren't expecting? The reason is that Firefox uses Google's "I'm Feeling Lucky" search, but you can change it. Scratching your head? Let's take a quick run through what we're talking about…

    Normally, if you typed just "howtogeek" into the address bar and then hit Enter, you'd be taken directly to the How-To Geek site. But how? Very simple! It's the same place you would have been taken if you typed "howtogeek" into Google and then clicked the "I'm Feeling Lucky" button, which takes you to the first result. This is what Firefox does behind the scenes when you put something into the address bar that isn't a URL.

    But what if you'd rather head to the search results page instead? Luckily, all you have to do is tweak an about:config parameter in Firefox. Just head into about:config in the address bar, and then filter for keyword.url. Double-click on the entry in the list, and then delete the &gfns=1 from the value. That's the part of the URL that triggers Google to redirect to the first result.

    And now, the next time you type something into the address bar, either on purpose or because you typo'd it, you'll be taken to the results page instead. About:config tweaking is lots of fun.

    Read the article

  • Speaking at Atlanta.MDF on March 12

    - by RickHeiges
    I am fortunate enough to be speaking to a user group with a really cool name - Atlanta.MDF (Microsoft Database Forum). Although I visit Atlanta often, it usually involves running from one concourse to another, and rarely do I get the chance to visit the user group. I have made it to the group on several occasions in the past, but it has been several years. This will be my first presentation to the group. I will be speaking about Database Consolidation - something I have been doing for years....(read more)

    Read the article

  • How to TDD test that objects are being added to a collection if the collection is private?

    - by Joshua Harris
    Assume that I planned to write a class that worked something like this:

        public class GameCharacter {
            private Collection<CharacterEffect> _collection;

            public void Add(CharacterEffect e) { ... }
            public void Remove(CharacterEffect e) { ... }
            public boolean Contains(CharacterEffect e) { ... }
        }

    When added, an effect does something to the character and is then added to _collection. When it is removed, the effect reverts the change to the character and is removed from _collection.

    It's easy to test whether the effect was applied to the character, but how do I test that the effect was added to _collection? What test could I write to start constructing this class?

    I could write a test where Contains would return true for a certain effect being in _collection, but I can't arrange a case where that function would return true, because I haven't implemented the Add method that is needed to place things in _collection. OK, so since Contains is dependent on having Add working, why don't I try to create Add first? Well, for my first test I need to figure out whether the effect was added to _collection. How would I do that? The only way to see if an effect is in _collection is with the Contains function.

    The only way that I could think to test this would be to use a FakeCollection that mocks the Add, Remove, and Contains of a real collection, but I don't want _collection being affected by outside sources. I don't want to add a setEffects(Collection effects) function, because I do not want the class to have that functionality. The one thing that I am thinking could work is this:

        public class GameCharacter<C extends Collection> {
            private Collection<CharacterEffect> _collection;

            public GameCharacter() {
                _collection = new C<CharacterEffect>();
            }
        }

    But that is just silly, making me declare the private data structure's type on every declaration of the character. Is there a way for me to test this without breaking TDD principles while still allowing me to keep my collection private?
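    One conventional answer (a sketch of mine, not from the post): treat Add and Contains as two halves of a single observable behavior and let the first test drive both into existence through the public API, so the private collection is never exposed. Assuming JUnit 4 and a hypothetical CharacterEffect stub:

        import static org.junit.Assert.*;
        import org.junit.Test;

        public class GameCharacterTest {

            @Test
            public void addedEffectIsReportedByContains() {
                GameCharacter character = new GameCharacter();
                CharacterEffect effect = new CharacterEffect(); // hypothetical stub

                character.Add(effect);

                // Contains is the public window onto the private collection,
                // so this one test specifies Add and Contains together.
                assertTrue(character.Contains(effect));
            }

            @Test
            public void removedEffectIsNoLongerReported() {
                GameCharacter character = new GameCharacter();
                CharacterEffect effect = new CharacterEffect();

                character.Add(effect);
                character.Remove(effect);

                assertFalse(character.Contains(effect));
            }
        }

    The collection stays an implementation detail: if you later swap it for a List or a Set, no test has to change.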

    Read the article

  • TUI - Oracle Exadata V2 implementation

    - by rene.kundersma
    With this post, a video recorded by Oracle Benelux. The message comes from the Senior ICT manager of TUI The Netherlands. I had the pleasure that Oracle Consulting The Netherlands allowed me to do the design, the implementation and the actual migration of this first Dutch Exadata success story. Rene Kundersma, Oracle Technology Services the Netherlands

    Read the article
