Search Results

Search found 21802 results on 873 pages for 'erx vb next coder'.


  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction: We are adopting Click Once as a deployment standard for thick .NET application clients. The latest version of this tool has matured to a point where it can be used in an enterprise environment. This guide will identify how to use Click Once deployment and promote code through the dev, test and production environments.

    Why Use Click Once over SCCM: If we already use SCCM, why add Click Once to the deployment options? The advantage of Click Once is the ability to update the code in a single location and have the update flow automatically down to the user community. There have been challenges in the past with getting configuration updates to download, but these can now be achieved. With SCCM you can do the same thing, but the update then needs to be packaged and pushed out to users. Each time a new user is added to an application, an administrator needs to spend time pushing out any required application packages. With Click Once, the user goes to a web link and the application and prerequisites are installed automatically.

    New Deployment Steps Overview: The deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production. To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools. Although this is the easiest path, it can introduce untested code into production and result in unexpected behaviour.

    1. Deploy the client application to a development web server using the Visual Studio 2008 Click Once deployment tools. Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio. (For details see 'Deploying Click Once Code from Visual Studio'.)
    2. xCopy the code to the test server. Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see 'Moving Click Once Code to a new Server without using Visual Studio'.)
    3. xCopy the code to the production server. Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that is trusted by the client machines. Finally, make sure the setup.exe contains the production install URL. If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xcopy the setup.exe file from dev to production. (For details see 'Moving Click Once Code to a new Server without using Visual Studio'.)

    Detailed Deployment Steps

    Deploying Click Once Code From Visual Studio: Open Visual Studio and create a new WinForms or WPF project. In Solution Explorer, right click on the project and select 'Publish' in the context menu. The 'Publish Wizard' will start. Enter the development deployment path; this could be a local directory or web site. When first publishing the solution, set this to a development web site and Visual Studio will create a site with an install.htm page. Click Next. Select whether the application will be available both online and offline, then click Finish. Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published. This time the Publish Wizard contains an additional option.
    The setup.exe file that is created has the install URL hardcoded in it, and it is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production: enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application. Visual Studio will publish the application to the desired location; in the process it will create an anonymous 'pfx' certificate to sign the deployment configuration files. A production certificate should be acquired in preparation for deployment to production.

    Directory structure created by Visual Studio
    Application files created by Visual Studio
    Development web site (install.htm) created by Visual Studio

    Moving Click Once Code to a new Server without using Visual Studio: To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files. The MageUI tool is usually located in the 'C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin' folder, or it can be downloaded from the web. When deploying to a new environment, copy all files in the project folder to the new server - in this case the 'ClickOnceSample' folder and contents. The old application versions can be deleted, in this case 'ClickOnceSample_1_0_0_0' and 'ClickOnceSample_1_0_0_1'. Open IIS Manager and create a virtual directory that points to the project folder, and make publish.htm the default web page. Run the MageUI tool and open the .application file in the root project folder (in this case in the 'ClickOnceSample' folder). Click on Deployment Options in the left hand list, update the URL to the new server URL and save the changes. When MageUI tries to save the file it will prompt for the file to be signed. This step cannot be bypassed if you want the Click Once deployment to work from a web site. The easiest solution to this for test is to use the auto-generated certificate that Visual Studio created for the project; this certificate can be found with the project source code. To save time, go to File > Preferences and configure the 'Use default signing certificate' fields. Future deployments will only require the application files to be transferred to the new server. The only difference is that when updating the .application file, the 'Version' must be updated to match the new version and the 'Application Reference' has to be updated to point to the new .manifest file.

    Updating the Configuration File of a Click Once Deployment Package without using Visual Studio: When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration. We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes. A new version of the application can be created by copying the latest version folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the Application Files directory. Rename the directory 'ClickOnceSample_1_0_0_3'. In the new folder, open the configuration file in Notepad and make the configuration changes. Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3). Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3), then save the file.
    Open the .application file in the root folder. Again update the version to 1.0.0.3. Since the file has not changed, the Deployment Options/Start Location URL should still be correct. The Application Reference needs to be updated to point to the new version's .manifest file. Save the file. The next time a user runs the application, the new version of the configuration file will be downloaded.

    It is worth noting that there are two different types of configuration parameters: application and user. With Click Once deployment the difference is significant. When an application is downloaded, the configuration file is also brought down to the client machine. The developer may have written code to update the user parameters in the application. As a result, each time a new version of the application is downloaded the user parameters are at risk of being overwritten. With Click Once deployment the system knows whether the user parameters are still the default values. If they are, they will be overwritten with the new default values in the configuration file. If they have been updated by the user, they will not be overwritten.

    Settings configuration view in Visual Studio

    Production Deployment: When deploying the code to production it is prudent to disable the development and test deployment sites. This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment. If those sites are active there is no way to know whether the application was downloaded from the production deployment rather than being redirected to test or dev.

    Troubleshooting: Clicking the install button on the install.htm page fails with "Error: URLDownloadToCacheFile failed with HRESULT '-2146697210'" and "Error: An error occurred trying to download <file>". This is due to the setup.exe file pointing to the wrong location. 'The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application.'
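
    For teams that prefer to script steps 2 and 3 instead of driving MageUI by hand, the same edits can in principle be made with mage.exe, the command-line sibling of MageUI. The commands below are only a rough sketch of that idea - the file names echo the sample above, the certificate name follows the usual Visual Studio naming convention rather than anything from this post, and the exact switch names should be verified against the SDK documentation for your version:

        rem update and re-sign the application manifest for the new version
        mage -Update ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest -Version 1.0.0.3
        mage -Sign ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest -CertFile ClickOnceSample_TemporaryKey.pfx
        rem point the deployment manifest at the new application manifest, bump its version and re-sign it
        mage -Update ClickOnceSample.application -AppManifest ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest -Version 1.0.0.3
        mage -Sign ClickOnceSample.application -CertFile ClickOnceSample_TemporaryKey.pfx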

    Read the article

  • How to write PowerShell code part 1 (Using external xml configuration file)

    - by ybbest
    In this post, I will show you how to use an external xml file with PowerShell. The advantage of doing so is that other people do not have to open up your PowerShell code to make configuration changes; instead, all they need to do is change the xml file. I will refactor my site creation script as an example; you can download the script here and the refactored code here.

    1. As you can see below, I hard-coded all the variables in the script itself.

        $url = "http://ybbest"
        $WebsiteName = "Ybbest"
        $WebsiteDesc = "Ybbest test site"
        $Template = "STS#0"
        $PrimaryLogin = "contoso\administrator"
        $PrimaryDisplay = "administrator"
        $PrimaryEmail = "[email protected]"
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"

    2. Next, I will show you how to read the xml file using PowerShell. You can use get-content to grab the content of the file.

        [xml] $xmlconfigurations=get-content .\SiteCollection.xml

    3. You can then assign it to a variable (the variable has to be typed [xml]); after that you can read the content of the xml document, and PowerShell also gives you nice IntelliSense when you press the Tab key.

        [xml] $xmlconfigurations=get-content .\SiteCollection.xml
        $xmlconfigurations.SiteCollection
        $xmlconfigurations.SiteCollection.SiteName

    4. After refactoring my code, I can set the variables using the xml file as below.

        #Set the parameters
        $siteInformation=$xmlinput.SiteCollection
        $url = $siteInformation.URL
        $siteName = $siteInformation.SiteName
        $siteDesc = $siteInformation.SiteDescription
        $Template = $siteInformation.SiteTemplate
        $PrimaryLogin = $siteInformation.PrimaryLogin
        $PrimaryDisplay = $siteInformation.PrimaryDisplayName
        $PrimaryEmail = $siteInformation.PrimaryLoginEmail
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"
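
    The post does not show SiteCollection.xml itself; a minimal file matching the property names read by the refactored script (URL, SiteName, SiteDescription, SiteTemplate, PrimaryLogin, PrimaryDisplayName, PrimaryLoginEmail) would presumably look something like the sketch below. The element names are inferred from the script and the values are placeholders, not taken from the original download:

        <SiteCollection>
          <URL>http://ybbest</URL>
          <SiteName>Ybbest</SiteName>
          <SiteDescription>Ybbest test site</SiteDescription>
          <SiteTemplate>STS#0</SiteTemplate>
          <PrimaryLogin>contoso\administrator</PrimaryLogin>
          <PrimaryDisplayName>administrator</PrimaryDisplayName>
          <PrimaryLoginEmail>admin@example.com</PrimaryLoginEmail>
        </SiteCollection>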

    Read the article

  • Unable to SSH to EC2

    - by Walker
    I downloaded the cert-xxx.pem and pk-xxx.pem files and also the keypair.pem, and moved them all to the .ssh folder on my Ubuntu client machine. This is what I get when I try to SSH with -v at the end:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /root/.ssh/identity
        debug1: Trying private key: /root/.ssh/id_rsa
        debug1: Trying private key: /root/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I am new to administering servers and I want to know if I should be trying to convert the pem files to id_rsa and id_dsa. I am not really sure that is possible, but I don't know how else to get id_rsa and id_dsa from those pem files, or whether there is any workaround. I managed to get access to EC2 the first time; this is my second try and I am unsuccessful so far. Any help is appreciated. Regards, Walker
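
    For context, the verbose output above shows the client trying only the default identity files (identity, id_rsa, id_dsa) and never the downloaded key pair. The usual way to point ssh at an EC2 key pair explicitly - shown here only as an illustrative sketch, with a placeholder hostname and the default Ubuntu AMI user name assumed - is:

        chmod 600 ~/.ssh/keypair.pem
        ssh -v -i ~/.ssh/keypair.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com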

    Read the article

  • Where's my MD.070?

    - by Dave Burke
    In a previous Blog entry titled “Where’s My MD.050” I discussed how the OUM Analysis Specification is the “new-and-improved” version of the more traditional Functional Design Document (or MD.050 for Oracle AIM stalwarts). In a similar way, the OUM Design Specification is an evolution of what we used to call the Technical Design Document (or MD.070). Let’s dig a little deeper…… In a traditional software development process, the “Design Task” would include all the time and resources required to design the software component(s), AND to create the final Technical Design Document. However, in OUM, we have created distinct Tasks for pure design work, along with an optional Task for pulling all of that work together into a Design Specification. Some of the Design Tasks shown above will result in their own Work Products (i.e. an Architecture Description), whilst other Tasks would act as “placeholders” for a specific work effort. In any event, the DS.140 Design Specification can include a combination of unique content, along with links to other Work Products, together which enable a complete technical description of the component, or solution, being designed. So next time someone asks “where’s my MD.070” the short answer would be to tell them to read the OUM Task description for DS.140 – Design Specification!

    Read the article

  • Connecting to a new installation of TFS 2010

    - by Enrique Lima
    When the installation and configuration of TFS 2010 is completed, the next step is to connect and use TFS. There is a Web Access component, but for it to be useful you need to create a project in the Team Project Collection. This is where Visual Studio 2010 comes in. Open Visual Studio 2010, then click on the Team Explorer tab (red arrow pointing to it) or go to View > Team Explorer. Once there, click the Connect to Team Project toolbar button. This will open up the Connect to Team Project dialog; click on Servers… On the Add/Remove Team Foundation Server dialog, click Add… On the Add Team Foundation Server dialog, enter the name of your server and click OK. If you are prompted for credentials, provide the credentials needed. Once accepted, the server will be listed on the Add/Remove Team Foundation Server dialog; click Close. You will be back at the Connect to Team Project dialog; assuming you have one collection, click Connect. (In the event you have more than one project collection, select the appropriate collection and then click Connect.) Your Team Explorer tab will look something like the image below.

    Read the article

  • Email attachments sent to a group don't show up for some

    - by blsub6
    My boss sent out an email from my Exchange 2010 org and attached a PDF and a Word doc to it. He came back the next day and told me that some of the 8 or 10 people who received this email could open the attachments no problem. The other 2 or 3 people could not. One of the people who could not open the attachment went so far as to call Comcast (his email service provider) and ask them where his attachments went. Comcast told this person that when they received the email, the attachment was 0 bytes in size. This may sound like more of a rant than a question, but I'm genuinely concerned. Is there any possible way that something could have gone wrong on my end that sent out the email to some with the attachment and to some without?

    Read the article

  • IIS will not install on Windows 7 Pro 64 bit

    - by Paul
    I have a new PC running Windows 7 Professional 64 bit. I have an issue installing IIS - it goes through the install process, but at the end tells me "not all components could be installed", with no additional information given. There is no sign of an error in the install log or in Event Viewer. However, at this point IIS is installed and working! I can run IIS Manager, browse to localhost and see the default page, but at the next reboot the system rolls back and the install vanishes. I have tried installing IIS using the Windows Components section in Add/Remove Programs, and I have also tried the Web Platform Installer and the command line, all with the same end result.

    Read the article

  • Communicator Messages not being saved to Outlook due to Outlook Integration error

    - by Mark Rogers
    For the most part my Office Communicator appears to be configured correctly. I can log in to my work account and see the work contact list. Outlook is working perfectly; it had a weird profile problem initially, but that was fixed. Unfortunately, even though I have set the setting that says "Save my instant message conversations in the Outlook Conversation History folder", my conversations have stopped saving to the Outlook Conversation History folder. Also, I have a yellow warning message on top of the server icon next to the status field. When I hover over or click on the message, it says there is an Outlook Integration Error. The administrator is having trouble figuring out what is causing it. What can cause Outlook Integration Errors in Communicator, and how do I go about troubleshooting them?

    Read the article

  • Cygwin Python and Windows Ruby

    - by Cheezo
    I have a peculiar setup as follows: I have Cygwin installed on a Windows 7 machine, and I need to execute a Python script set up in Cygwin from the Windows CLI. This works fine: c:\cygwin\bin\python2.6.exe c:\cygwin\bin\python-script. This python-script accesses a file, ~/.some_config_file, which translates to /home/user-name when I execute it from Windows as above, so this works as expected. Now, the next step is to execute this Python script from Ruby (which is set up on Windows natively, without Cygwin). When I execute the script from Ruby, ~/.some_config_file translates to /cygdrive/c/Users/user-name instead of the expected /home/user-name, leading to the script failing. I understand that something in the environment, PATH etc. needs to be set correctly, although I cannot seem to find what exactly.
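
    Since ~ is normally expanded from the HOME environment variable, one way to test that theory - a sketch only, with a hypothetical user name, not a confirmed fix - is to set HOME to the Cygwin home directory before Ruby shells out to the script:

        rem from the Windows CLI (or set the same variable in the Ruby process environment before the call)
        set HOME=C:\cygwin\home\user-name
        c:\cygwin\bin\python2.6.exe c:\cygwin\bin\python-script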

    Read the article

  • How I understood monads, part 1/2: sleepless and self-loathing in Seattle

    - by Bertrand Le Roy
    For some time now, I had been noticing some interest for monads, mostly in the form of unintelligible (to me) blog posts and comments saying “oh, yeah, that’s a monad” about random stuff as if it were absolutely obvious and if I didn’t know what they were talking about, I was probably an uneducated idiot, ignorant about the simplest and most fundamental concepts of functional programming. Fair enough, I am pretty much exactly that. Being the kind of guy who can spend eight years in college just to understand a few interesting concepts about the universe, I had to check it out and try to understand monads so that I too can say “oh, yeah, that’s a monad”. Man, was I hit hard in the face with the limitations of my own abstract thinking abilities. All the articles I could find about the subject seemed to be vaguely understandable at first but very quickly overloaded the very few concept slots I have available in my brain. They also seemed to be consistently using arcane notation that I was entirely unfamiliar with. It finally all clicked together one Friday afternoon during the team’s beer symposium when Louis was patient enough to break it down for me in a language I could understand (C#). I don’t know if being intoxicated helped. Feel free to read this with or without a drink in hand. So here it is in a nutshell: a monad allows you to manipulate stuff in interesting ways. Oh, OK, you might say. Yeah. Exactly. Let’s start with a trivial case: public static class Trivial { public static TResult Execute<T, TResult>( this T argument, Func<T, TResult> operation) { return operation(argument); } } This is not a monad. I removed most concepts here to start with something very simple. There is only one concept here: the idea of executing an operation on an object. This is of course trivial and it would actually be simpler to just apply that operation directly on the object. But please bear with me, this is our first baby step. Here’s how you use that thing: "some string" .Execute(s => s + " processed by trivial proto-monad.") .Execute(s => s + " And it's chainable!"); What we’re doing here is analogous to having an assembly chain in a factory: you can feed it raw material (the string here) and a number of machines that each implement a step in the manufacturing process and you can start building stuff. The Trivial class here represents the empty assembly chain, the conveyor belt if you will, but it doesn’t care what kind of raw material gets in, what gets out or what each machine is doing. It is pure process. A real monad will need a couple of additional concepts. Let’s say the conveyor belt needs the material to be processed to be contained in standardized boxes, just so that it can safely and efficiently be transported from machine to machine or so that tracking information can be attached to it. Each machine knows how to treat raw material or partly processed material, but it doesn’t know how to treat the boxes so the conveyor belt will have to extract the material from the box before feeding it into each machine, and it will have to box it back afterwards. This conveyor belt with boxes is essentially what a monad is. It has one method to box stuff, one to extract stuff from its box and one to feed stuff into a machine. So let’s reformulate the previous example but this time with the boxes, which will do nothing for the moment except containing stuff. 
public class Identity<T> { public Identity(T value) { Value = value; } public T Value { get; private set;} public static Identity<T> Unit(T value) { return new Identity<T>(value); } public static Identity<U> Bind<U>( Identity<T> argument, Func<T, Identity<U>> operation) { return operation(argument.Value); } } Now this is a true to the definition Monad, including the weird naming of the methods. It is the simplest monad, called the identity monad and of course it does nothing useful. Here’s how you use it: Identity<string>.Bind( Identity<string>.Unit("some string"), s => Identity<string>.Unit( s + " was processed by identity monad.")).Value That of course is seriously ugly. Note that the operation is responsible for re-boxing its result. That is a part of strict monads that I don’t quite get and I’ll take the liberty to lift that strange constraint in the next examples. To make this more readable and easier to use, let’s build a few extension methods: public static class IdentityExtensions { public static Identity<T> ToIdentity<T>(this T value) { return new Identity<T>(value); } public static Identity<U> Bind<T, U>( this Identity<T> argument, Func<T, U> operation) { return operation(argument.Value).ToIdentity(); } } With those, we can rewrite our code as follows: "some string".ToIdentity() .Bind(s => s + " was processed by monad extensions.") .Bind(s => s + " And it's chainable...") .Value; This is considerably simpler but still retains the qualities of a monad. But it is still pointless. Let’s look at a more useful example, the state monad, which is basically a monad where the boxes have a label. It’s useful to perform operations on arbitrary objects that have been enriched with an attached state object. public class Stateful<TValue, TState> { public Stateful(TValue value, TState state) { Value = value; State = state; } public TValue Value { get; private set; } public TState State { get; set; } } public static class StateExtensions { public static Stateful<TValue, TState> ToStateful<TValue, TState>( this TValue value, TState state) { return new Stateful<TValue, TState>(value, state); } public static Stateful<TResult, TState> Execute<TValue, TState, TResult>( this Stateful<TValue, TState> argument, Func<TValue, TResult> operation) { return operation(argument.Value) .ToStateful(argument.State); } } You can get a stateful version of any object by calling the ToStateful extension method, passing the state object in. You can then execute ordinary operations on the values while retaining the state: var statefulInt = 3.ToStateful("This is the state"); var processedStatefulInt = statefulInt .Execute(i => ++i) .Execute(i => i * 10) .Execute(i => i + 2); Console.WriteLine("Value: {0}; state: {1}", processedStatefulInt.Value, processedStatefulInt.State); This monad differs from the identity by enriching the boxes. There is another way to give value to the monad, which is to enrich the processing. An example of that is the writer monad, which can be typically used to log the operations that are being performed by the monad. Of course, the richest monads enrich both the boxes and the processing. That’s all for today. I hope with this you won’t have to go through the same process that I did to understand monads and that you haven’t gone into concept overload like I did. Next time, we’ll examine some examples that you already know but we will shine the monadic light, hopefully illuminating them in a whole new way. 
Realizing that this pattern is actually in many places but mostly unnoticed is what will enable the truly casual “oh, yes, that’s a monad” comments. Here’s the code for this article: http://weblogs.asp.net/blogs/bleroy/Samples/Monads.zip The Wikipedia article on monads: http://en.wikipedia.org/wiki/Monads_in_functional_programming This article was invaluable for me in understanding how to express the canonical monads in C# (interesting Linq stuff in there): http://blogs.msdn.com/b/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx
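
    The writer monad is only name-dropped above; as a companion to the identity and state examples, here is a minimal sketch of what one could look like in the same extension-method style. The names (Logged<T>, Log, ToLogged) are illustrative and not taken from the article or its sample download, and the Bind signature follows the article's relaxed convention, so the log message is passed alongside the operation rather than returned by it:

        public class Logged<T> {
            public Logged(T value, string log) { Value = value; Log = log; }
            public T Value { get; private set; }
            public string Log { get; private set; }
        }

        public static class LoggedExtensions {
            // Box a raw value together with an empty log.
            public static Logged<T> ToLogged<T>(this T value) {
                return new Logged<T>(value, string.Empty);
            }
            // Run an operation on the boxed value and append a message to the accumulated log.
            public static Logged<U> Bind<T, U>(this Logged<T> argument, Func<T, U> operation, string message) {
                return new Logged<U>(operation(argument.Value), argument.Log + message + Environment.NewLine);
            }
        }

        // Usage: the value is processed as before, while the box accumulates a trace of what happened.
        var logged = 3.ToLogged()
            .Bind(i => i + 1, "incremented")
            .Bind(i => i * 10, "multiplied by ten");
        Console.WriteLine("Value: {0}; log: {1}", logged.Value, logged.Log);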

    Read the article

  • Community Events and Workshops in November 2012 #ssas #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Alberto and I have a busy agenda until the end of the month, but if you are based in Northern Europe there are many chances to meet one of us in the next couple of weeks!

    Belgium, 20 November 2012 – SQL Server Days 2012 with Marco Russo. I will present two sessions at this conference, "Data Modeling for Tabular" and "Querying and Optimizing DAX".
    Copenhagen, 21-22 November 2012 – SSAS Tabular Workshop with Alberto Ferrari. Alberto will be the speaker for two days – you can still register if you want a full immersion!
    Copenhagen, 21 November 2012 – Free Community Event with Alberto Ferrari (hosted at Microsoft Hellerup). In the evening Alberto will present "Excel 2013 PowerPivot in Action".
    Munich, 27-28 November 2012 – SSAS Tabular Workshop with Alberto Ferrari. The SSAS workshop will also run in Germany, this time in Munich. Here, too, there are still some seats available.
    Munich, 27 November 2012 – Free Community Event with Alberto Ferrari (hosted at Microsoft). In the evening Alberto will present "Excel 2013 PowerPivot in Action".
    Moscow, 27-28 November 2012 – TechEd Russia 2012 with Marco Russo. I will speak during the keynote on November 27 and will present two sessions the day after, "Developing an Analysis Services Tabular Project BI Semantic Model" and "Excel 2013 PowerPivot in Action".
    Stockholm, 29-30 November 2012 – SSAS Tabular Workshop with Marco Russo. I will run this workshop in Stockholm – if you want to register, hurry up! A few seats are still available!
    Stockholm, 29 November 2012 – Free Community Event (sold out!) with Marco Russo. In the evening I will present "Excel 2013 PowerPivot in Action".

    If you want to attend an SSAS Tabular Workshop online, you can also register for the online edition of December 5-6, 2012, which is still in early bird and is scheduled in a friendly time zone for the Americas (which could be good for Europe too, in case you don't mind attending a workshop until midnight!).

    Read the article

  • Understanding how Tracert works

    - by iridescent
    From what I have gathered so far, Tracert works by sending 3 ICMP echo messages, starting with a TTL value of 1. For each router the packet encounters, the TTL value is decremented. For the 1st router, 1-1 = 0, so an ICMP "time exceeded" message is sent back to the sender machine. Next, the TTL value is incremented to 2 by the sender machine and the cycle repeats for the 2nd router (2 -> 1 -> 0), and so on. Please correct me if my understanding is flawed. I am curious as to why the ICMP "time exceeded" message isn't displayed by Tracert in Command Prompt, since it is in fact an error message? The cycle simply proceeds on. Thanks.
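
    The TTL-stepping loop described above can be reproduced in a few lines with the .NET Ping class. This is only an illustrative sketch of the mechanism (the real tracert also measures round-trip times and sends three probes per hop); the hop limit and timeout values are arbitrary:

        using System;
        using System.Net.NetworkInformation;
        using System.Text;

        class TraceSketch
        {
            static void Main(string[] args)
            {
                string target = args.Length > 0 ? args[0] : "example.com";
                byte[] payload = Encoding.ASCII.GetBytes("trace probe");
                using (var ping = new Ping())
                {
                    for (int ttl = 1; ttl <= 30; ttl++)
                    {
                        // Each probe goes out with a small TTL; the router that decrements it to zero
                        // answers with ICMP "time exceeded" (IPStatus.TtlExpired), which is how each hop is discovered.
                        PingReply reply = ping.Send(target, 3000, payload, new PingOptions(ttl, true));
                        Console.WriteLine("{0,2}  {1}  {2}", ttl, reply.Status, reply.Address);
                        if (reply.Status == IPStatus.Success) break; // reached the destination
                    }
                }
            }
        }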

    Read the article

  • New Fusion CRM Webinars for Partners dates and subjects announced

    - by Richard Lefebvre
    New Fusion CRM weekly webinar dates and subjects have been announced! Visit our microsite to find out about the sessions to come and mark them in your agenda. The next session will take place Monday, April 2nd at 3pm GMT / 4pm CET and will address Fusion CRM Sales Planning. To check the complete agenda and see the login details, please visit our dedicated microsite.

    How to join the dedicated microsite: Click on http://isdportal.oracle.com/isd_html/sf.htm, enter your email address in the corresponding field, and enter fusion_crm in the "Access URL/Page Token" field.

    Agenda: The list of sessions is published and will be regularly updated on the microsite.
    Duration: Each session lasts up to 60 minutes.
    Webex: The respective webinar link and session ID are published on the microsite.
    Audio: The audio call details (telephone numbers by country, call number and password) are indicated on the microsite.
    Slides: For your convenience, a pdf copy of each presentation will be stored in the microsite's document section.

    We hope that this series of webcasts will be instrumental to your Fusion CRM business success! For further information please contact me at [email protected]

    Read the article

  • Render rivers in a grid.

    - by Gabriel A. Zorrilla
    I have created a random height map and now I want to create rivers. I've made an algorithm based on A* to make rivers flow from peaks to the sea, and now I'm trying to figure out an elegant algorithm to render them. It's a 2D, square map grid. Each cell the river passes through has a simple integer value of the form rivernumber && pointOrder, i.e. 10, 11, 12, 13, 14, 15, 16... 1+N for the first river, 20, 21, 22, 23... 2+N for the second, etc. This is created at map grid generation time and is executed just once, when the world is generated. I wanted to treat each river as a vector, but there is a problem: if the same river has branches (because I add some noise to generate branches), I cannot just connect the points in order. The second alternative is to write a complex algorithm that analyzes each point, checks whether the next one is a branch, and if so triggers another algorithm that takes care of the branch and then returns to the main river, etc. Very complex and inelegant. Perhaps there is a solution in the world generation algorithm or in the river rendering algorithm that is commonly used in these cases and that I'm not aware of. Any tips? Thanks!!
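
    As a concrete reading of the encoding described above (river number combined with point order), decoding the grid back into one ordered point list per river could look roughly like the sketch below. The grid variable, its dimensions and the Point type are assumptions for illustration, and the decimal split only works while the point order stays below ten - one reason a per-cell struct or tuple may be preferable to a packed integer:

        // Assumes cell values like 10, 11, 12... where value / 10 is the river id
        // and value % 10 is the point order within that river (per the scheme above).
        var rivers = new Dictionary<int, SortedDictionary<int, Point>>();
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int cell = grid[x, y];
                if (cell == 0) continue;                  // no river in this cell
                int riverId = cell / 10;
                int order = cell % 10;
                if (!rivers.ContainsKey(riverId))
                    rivers[riverId] = new SortedDictionary<int, Point>();
                rivers[riverId][order] = new Point(x, y); // ordered polyline for each river
            }
        }
        // Each rivers[id].Values is now a point sequence that can be rendered as a line strip;
        // branches still need separate handling, which is exactly the open question in the post.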

    Read the article

  • Ubuntu backlight problem with Nvidia graphics

    - by Vladimir
    I have a laptop mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia 335M GT onboard. On the Ubuntu releases 10.04, 10.10, 11.04 and 11.10 I had problems changing the screen backlight with both the nouveau and nvidia drivers: the FN+F4/F5 buttons did not change my brightness. I tried to edit xorg.conf, adding Option "RegistryDwords" "EnableBrightnessControl=1". I also tried to add some options to grub: acpi_osi="Linux" acpi_backlight=vendor. Neither worked for me. Today I installed Ubuntu 12.04 beta2 and... with the nouveau driver my FN keys work and change the brightness (is it the new 3.0.22 Linux kernel, or a patched nouveau driver? I don't know). This is a big step forward. But when I install the proprietary nvidia driver (295.33), the FN keys stop working and I can't change the brightness. I also tried the workarounds with xorg and grub, with no result. I tried to install acpi from apt - no result. Is there anything left to try? I really need that nvidia driver working with the FN keys, as I would like to have working 3D acceleration.
    P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers?
    If there is a need to provide some log data, please tell me what I should post, as I'm a bit new to Ubuntu.
    P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others).
    P.P.P.S. The other FN buttons work with both drivers (Mute, Vol Up/Down, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song).
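
    For reference, the EnableBrightnessControl option mentioned above normally goes in the Device section of the xorg.conf used by the proprietary nvidia driver; a minimal sketch (the Identifier string is arbitrary, and this is the same workaround already reported as unsuccessful here, not a guaranteed fix) looks like this:

        Section "Device"
            Identifier "Nvidia Card"
            Driver     "nvidia"
            Option     "RegistryDwords" "EnableBrightnessControl=1"
        EndSection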

    Read the article

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives reboot. My main motivation was differentiating between instances at the prompt using the \h format in PS1. EDIT: I also changed permissions on my home directory - I made it group-writable. END EDIT. Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is:

        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Should I have done something after changing the hostname? Now I can't get into the instance! :(
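
    sshd is typically strict about permissions on the home directory and ~/.ssh (the StrictModes setting), so the group-writable home directory mentioned in the edit is a likely culprit. A sketch of tightening things back up from another session or the instance console - assuming the default ubuntu account; adjust the user name to match - would be:

        chmod g-w /home/ubuntu
        chmod 700 /home/ubuntu/.ssh
        chmod 600 /home/ubuntu/.ssh/authorized_keys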

    Read the article

  • Git: Fixing a bug affecting two branches

    - by Aram Kocharyan
    I'm basing my Git repo on http://nvie.com/posts/a-successful-git-branching-model/ and was wondering what happens if you have this situation: say I'm developing on two feature branches A and B, and B requires code from A. Node X introduces an error in feature A which affects branch B, but this is not detected at node Y, where features A and B were merged and testing was conducted before branching out again and working on the next iteration. As a result, the bug is found at node Z by the people working on feature B. At this stage it's decided that a bugfix is needed. This fix should be applied to both features, since the people working on feature A also need the bug fixed - it's part of their feature. Should a bugfix branch be created from the latest feature A node (the one branching from node Y) and then merged with feature A? After which both features are merged into develop again and tested before branching out? The problem with this is that it requires both branches to merge to fix the issue. Since feature B doesn't touch code in feature A, is there a way to change the history at node Y by implementing the fix and still allowing the feature B branch to remain unmerged, yet have the fixed code from feature A? Mildly related: Git bug branching convention
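
    In plain git terms, the first option described above - sketched here with hypothetical branch names, independent of any particular git-flow tooling - is to branch the fix off feature A and merge it into both features, which avoids rewriting history that branch B already builds on:

        git checkout -b bugfix/shared-issue feature-A   # start the fix from feature A
        # ...commit the fix here...
        git checkout feature-A
        git merge bugfix/shared-issue                   # feature A picks up the fix
        git checkout feature-B
        git merge bugfix/shared-issue                   # feature B gets the same commits, no rebase needed
        git branch -d bugfix/shared-issue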

    Read the article

  • Copy and Paste without needing to use the Mouse

    - by Paul Farry
    In previous versions of Windows, when you were copying files (Ctrl-C), then Alt-Tabbing to the appropriate window and pasting (Ctrl-V), everything could be driven by the keyboard. With Vista and 7, it seems that if you are copying and pasting and the files already exist in the destination location, you have to click "Copy and Replace" with the mouse; there doesn't seem to be a way to do it from the keyboard. "Do this for the next {n} conflicts" can be driven by Alt-D, but "Copy and Replace" and "Don't Copy" cannot. Or is there a way, and have I just missed something? I'm more than happy to change the registry or something to enable keyboard shortcuts for this.

    Read the article

  • This is the End of Business as Usual...

    - by Michael Snow
    This week, we'll be hosting our last Social Business Thought Leader Series Webcast for 2012. Our featured guest this week will be Brian Solis of Altimeter Group. As we've been going through the preparations for Brian's webcast, it became very clear that an hour's time is barely scraping the surface of the depth of Brian's insights and analysis. Accordingly, in the spirit of sharing Brian's perspective for all of our readers, we'll be featuring guest posts all this week pulled from Brian's larger collection of blog postings on his own website. If you like what you've read here this week, we highly recommend digging deeper into his tome of wisdom. Guest Post by Brian Solis, Analyst, Altimeter Group as originally featured on his site with the minor change of the video addition at the beginning of the post. This is the End of Business as Usual and the Beginning of a New Era of Relevance - Brian Solis, Principal Analyst, Altimeter Group The Times They Are A-Changin’ Come gather ’round people Wherever you roam And admit that the waters Around you have grown And accept it that soon You’ll be drenched to the bone If your time to you Is worth savin’ Then you better start swimmin’ Or you’ll sink like a stone For the times they are a-changin’. - Bob Dylan I’m sure you are wondering why I chose lyrics to open this article. If you skimmed through them, stop here for a moment. Go back through the Dylan’s words and take your time. Carefully read, and feel, what it is he’s saying and savor the moment to connect the meaning of his words to the challenges you face today. His message is as important and true today as it was when they were first written in 1964. The tide is indeed once again turning. And even though the 60s now live in the history books, right here, right now, Dylan is telling us once again that this is our time to not only sink or swim, but to do something amazing. This is your time. This is our time. But, these times are different and what comes next is difficult to grasp. How people communicate. How people learn and share. How people make decisions. Everything is different now. Think about this…you’re reading this article because it was sent to you via email. Yet more people spend their online time in social networks than they do in email. Duh. According to Nielsen, of the total time spent online 22.5% are connecting and communicating in social networks. To put that in perspective, the time spent in the likes of Facebook, Twitter, and Youtube is greater than online gaming at 9.8%, email at 7.6% and search at 4%. Imagine for a moment if you and I were connected to one another in Facebook, which just so happens to be the largest social network in the world. How big? Well, Facebook is the size today of the entire Internet in 2004. There are over 1 billion people friending, Liking, commenting, sharing, and engaging in Facebook…that’s roughly 12% of the world’s population. Twitter has over 200 million users. Ever hear of tumblr? More time is spent on this popular microblogging community than Twitter. The point is that the landscape for communication and all that’s affected by human interaction is profoundly different than how you and I learned, shared or talked to one another yesterday. This transformation is only becoming more pervasive and, it’s not going back. Survival of the Fitting But social media is just one of the channels we can use to reach people. I must be honest. I’m as much a part of tomorrow as I am of yesteryear. 
    It's why I spend all of my time researching the evolution of media and its impact on business and culture. Because of you, I share everything I learn in newsletters, emails, blogs, YouTube videos, and also traditional books. I'm dedicated to helping everyone not only understand, but grasp the change that's before you. Technologies such as social, mobile, virtual, augmented, et al compel us to adapt our story and value proposition and extend our reach to be part of communities we don't realize exist. The people who will keep you in business or running tomorrow are the very people you're not reaching today. Before you continue to read on, allow me to clarify my point of view. My inspiration for writing this is to help you augment, not necessarily replace, the programs you're running today. We must still reach those who matter to us in the ways they prefer to be engaged. To reach what I call the connected consumer of Generation-C, we must also reach them in the ways they wish to be engaged. And in all of my work, how they connect, talk to one another, influence others, and make decisions is nothing like that of the traditional consumers of the past. Nor are they merely the kids… the Millennials. Connected consumers are represented across every age group and demographic. As you can see, use of social networks, media sharing sites, microblogs, blogs, etc. spans equally across Gen Y, Gen X, and Baby Boomers. The DNA of connected customers is indiscriminate of age or any other demographic for that matter. This is more about psychographics, the linkage of people through common interests, than it is about their age, gender, education, nationality or level of income. Once someone is introduced to the marvels of connectedness, the sensation becomes a contagion. It touches and affects everyone. And that's why this isn't going anywhere but toward normalcy. Social networking isn't just about telling people what you're doing. Nor is it just about generic, meaningless conversation. Today's connected consumer is incredibly influential. They're connected to hundreds and even thousands of other like-minded people. What they experience and what they support is shared throughout these networks, and as information travels it shapes and steers the impressions, decisions, and experiences of others. For example, if we revisit the Nielsen research, we get an idea of just how big this is becoming. 75% spend heavily on music. How does that translate to the arts? I'd imagine the number is equally impressive. If 53% follow their favorite brand or organization, imagine what's possible. Just like this email list that connects us, connections in social networks are powerful. The difference, however, is that people spend more time in social networks than they do in email. Everything begins with an understanding of the "5 W's and H.E." – Who, What, When, Where, How, and to What Extent? The data that comes back tells you which networks are important to the people you're trying to reach, how they connect, what they share, what they value, and how to connect with them. From there, your next steps are to create a community strategy that extends your mission, vision, and value and aligns it with the interests, behavior, and values of those you wish to reach and galvanize. To help, I've prepared an action list for you, otherwise known as the 10 Steps Toward New Relevance:

    1. Answer why you should engage in social networks and why anyone would want to engage with you.
    2. Observe what brings them together and define how you can add value to the conversation.
    3. Identify the influential voices that matter to your world, recognize what's important to them, and find a way to start a dialogue that can foster a meaningful and mutually beneficial relationship.
    4. Study the best practices of not just organizations like yours, but also those who are successfully reaching the type of people you're trying to reach – it's benchmarking against competitors and benchmarking against undefined opportunities.
    5. Translate all you've learned into a convincing presentation written to demonstrate tangible opportunity to your executive board; make the case through numbers, trends, data and insights – understanding they have no idea what's going on out there and you are both the scout and the navigator (start with a recommended pilot so everyone can learn together).
    6. Listen to what they're saying and develop a process to learn from activity, adapt to interests and steer engagement based on insights.
    7. Recognize how they use social media and innovate based on what you observe to captivate their attention.
    8. Align your objectives with their objectives. If you're unsure of what they're looking for… ask.
    9. Invest in the development of content and engagement.
    10. Build a community, invest in values, spark meaningful dialogue, and offer tangible value… the kind of value they can't get anywhere else. Take advantage of the medium and the opportunity!

    The reality is that we live and compete in a perpetual era of Digital Darwinism, the evolution of consumer behavior when society and technology evolve faster than our ability to adapt. This is why it's our time to alter our course. We must connect with those who are defining the future of engagement, commerce, business, and how the arts are appreciated and supported. Even though it is the end of business as usual, it is the beginning of a new age of opportunity. The consumer revolution is already underway, and the question is: how do you better understand the role you play in this production as a connected or social consumer as well as a business professional? Again, this is your time to define a new era of engagement and relevance.

    Originally written for The National Arts Marketing Project. Connect with Brian via: Twitter | LinkedIn | Facebook | Google+

    --- Note from Michael: If you really like this post above, check out Brian's TEDTalk and his thought process for preparing it in this post: http://www.briansolis.com/2012/10/tedtalk-reinventing-consumer-capitalism-screw-business-as-usual/

    Read the article

  • Building Web Applications with ACT and jQuery

    - by dwahlin
    My second talk at TechEd is focused on integrating ASP.NET AJAX and jQuery features into websites (if you’re interested in Silverlight you can download code/slides for that talk here). The content starts out by discussing ScriptManager features available in ASP.NET 3.5 and ASP.NET 4 and provides details on why you should consider using a Content Delivery Network (CDN).  If you’re running an external facing site then checking out the CDN features offered by Microsoft or Google is definitely recommended. The talk also goes into the process of contributing to the Ajax Control Toolkit as well as the new Ajax Minifier tool that’s available to crunch JavaScript and CSS files. The extra fun starts in the next part of the talk which details some of the work Microsoft is doing with the jQuery team to donate template, globalization and data linking code to the project. I go into jQuery templates, data linking and a new globalization option that are all being worked on. I want to thank Stephen Walther, Dave Reed and James Senior for their thoughts and contributions since some of the topics covered are pretty bleeding edge right now.The slides and sample code for the talk can be downloaded below.     Download Slides and Samples

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software. But make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data? Something that regenerates targets not only on mod times but on certain other properties (e.g. completeness). (Or, alternatively, some paper that describes such a tool.)

    As an illustration, I'd like to automate the following process:

    - get data (e.g. a tarball) from some regularly updated source
    - copy it somewhere if it's not there (based e.g. on some filename scheme)
    - convert the files to a different format (but only if there aren't successfully converted ones there already - e.g. from a previous attempt - using a custom comparison routine)
    - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide based on the existence of the file and file "freshness")
    - finally compute something (e.g. a word count for something identifiable and store it in the database, but only if the DB does not have an entry for that exact ID yet)

    Observations:

    - there are different stages
    - each stage is usually simple to compute or implement in isolation
    - each stage may be simple, but the data volume may be large
    - each stage may produce a few errors
    - each stage may have different signals for when (re)processing is needed

    Requirements:

    - builds should be interruptible and idempotent (== robust)
    - when interrupted, already processed objects should be reused to speed up the next run
    - data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be OK)
    - some form of dependency graph that describes the process would be nice for later visualizations
    - it should leverage existing programs, if possible

    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are an OK glue for a variety of other programs written in other languages), so I wonder if worse is better?
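
    Since make itself is the baseline here, a common workaround for data pipelines is the stamp-file idiom: each stage touches a marker file only after its own completeness check has passed, so modification times still drive the dependency graph while the real "is it done?" logic lives in the recipe. A rough sketch with made-up script and file names (recipe lines are tab-indented in a real Makefile):

        # stamp files stand in for "stage completed and verified"
        converted.stamp: raw.tar.gz
            ./convert.sh raw.tar.gz converted/
            ./verify_conversion.sh converted/   # custom completeness check; failure means no stamp
            touch $@

        wordcounts.stamp: converted.stamp
            ./load_wordcounts.sh converted/     # the script itself skips IDs already present in the DB
            touch $@

        all: wordcounts.stamp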

    Read the article

  • Code Metrics: Number of IL Instructions

    - by DigiMortal
    In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications. Now let's take a step further and look at how to measure compiled code. This way we can get some picture of what the compiler produces. In this posting I will introduce you to the code metric called number of IL instructions. NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want to get a better idea about the context of this metric and LoC, please read my first posting about LoC.

    What are IL instructions? When code written in some .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are executed later by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL) – this is a more general language than C# or VB.NET, for example. You can use the ILDasm tool to convert assemblies to IL assembler so you can read them. As IL instructions are the building blocks of all .NET Framework binary code, these instructions are small and highly general – we don't want a very rich low-level language because it executes more slowly than a more general one. For every method or property call in a .NET Framework language there is a corresponding set of IL instructions. There is no 1:1 relationship between a line in a high-level language and a line in IL assembler; there are more IL instructions than lines of C# code, for example.

    How many instructions are there? I have no general answer because it really depends on your code. Here you can see some metrics from my current community project, which is developed on SharePoint Server 2007. On average I have about 7 IL instructions per line of code. This is not a metric you should use; it is just an illustrative example so you can see the difference between the number of lines and the number of IL instructions.

    Why should I measure the number of IL instructions? Just take a look at the chart above. The compiler does something that you cannot see – it compiles your code to IL. This is not an intuitive process because you usually cannot say exactly what the end result will be. You know it roughly, but you don't know it exactly. Therefore we can expect some surprises, and that's why we should measure the number of IL instructions. For example, you may find a better solution for some method in your source code. It looks nice, it works nicely and everything seems to be okay. But on a server under load your fix may be way slower than the previous code. Although you minimized the number of lines of code, you ended up increasing the number of IL instructions.

    How to measure the number of IL instructions? My choice is NDepend, because Visual Studio is not able to measure this metric. The steps to take are easy. Open your NDepend project or create a new one and add all your application assemblies to the project (you can also add a Visual Studio solution to the project). Run the project analysis and wait until it is done. You can see the overall stats in the global summary window. This is the same window I used to read the LoC and number of IL instructions metrics for my chart. Meanwhile I made some changes to my code (enabled advanced caching for the events and event registrations module) and then ran the code analysis again to get results for this section of the posting. NDepend is also able to tell you exactly which parts of the code have problematically many IL instructions.
    The code quality section of the CQL Query Explorer shows you how many problems there are with members in the analyzed code. If you click on the line Methods too big (NbILInstructions) you can see all the problematic class members in the CQL Explorer, shown in the image on the right. In my case I have 10 methods that are too big, and two of them have a horrible number of IL instructions – just take a look at the first two methods in this TOP 10. Also note the query box: NDepend has an easy, SQL-like query language for querying code analysis results. You can modify these queries if you like, and you can also define your own if the default set is not enough for you.

    What is a good result? As you can see from the query window, the number of IL instructions per member should be at most 200. Of course, as always, the fewer instructions you have, the better performing code you have. I don't mean small differences here but big ones. For example, take a look at the first method in the warnings list: the number of IL instructions it has is huge. And believe me – this method looks awful.

    Conclusion: The number of IL instructions is a useful metric when optimizing your code. For analyzing code at a general level to find overly long methods you can use the LoC metric, because it is more intuitive and you can therefore handle the situation more easily. You can also use NDepend as a code metrics tool because it has a lot of metrics to offer.
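
    The "Methods too big (NbILInstructions)" line in the explorer corresponds to one of NDepend's built-in CQL rules; from memory it looks roughly like the query below (treat the exact syntax as an approximation and check the default rule set in your NDepend version):

        // Flag methods whose compiled body exceeds 200 IL instructions
        WARN IF Count > 0 IN SELECT TOP 10 METHODS WHERE NbILInstructions > 200 ORDER BY NbILInstructions DESC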

    Read the article

  • Better Embedded 2013

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2013/07/30/better-embedded-2013.aspx On July 8th and 9th I had a chance to attend and speak at the Better Embedded 2013 conference in Florence. Visiting Florence is always a pleasure, but having a chance to attend such an interesting conference and to meet Marco Dal Pino, Paolo Patierno, Mirco Vanini and many other embedded developers made those two days an experience to be remembered. I did two sessions, one on Windows Embedded Standard and "PCs" usage in the embedded world and another one on Android for embedded devices; you can find the slides on the Better Embedded website: www.betterembedded.it. You can also find slides for many other interesting sessions, ranging from the .NET Micro Framework to embedded Linux, from Qt Quick to software licenses. Packing so many different resources about embedded systems into one conference was not easy, but the result is a very nice mix of content ranging from firmware development to cloud-based systems. This is a great way to get an overview of what's new or interesting in embedded systems and to get great ideas about how to build your new device. Don't forget to follow @Better_Embedded on Twitter so you don't miss next year's conference! Thanks to the Better Embedded team for allowing me to use some of the official pictures in this blog post. You can find a good selection of those pictures (just to experience the atmosphere of the conference) on its Facebook page: http://dvlr.it/DHDB

    Read the article

  • Grub not showing on startup for Windows 8.1 Ubuntu 13.10 Dual boot

    - by driftking96
    OK, I'm a newbie to Ubuntu and I bought a Windows 8 pre-installed laptop last month. I updated to Windows 8.1 and then thought about installing Ubuntu as a dual boot so I could mess around and learn more about it. So I followed a YouTube tutorial ( http://www.youtube.com/watch?v=dJfTvkgLqfQ ) and I got my stuff working fine. The first few times I booted I got the GRUB menu instead of my default HP Boot OS Manager, and I was able to select my OS. So I went to sleep, and the next day I turned on my computer and the GRUB menu did not show up. I tried several times and it did not automatically show up. In order to see the GRUB menu I have to turn on my PC, press ESC at startup to pause it, and press F9 to get the boot options. From there I have to pick from OS Boot, Ubuntu, Ubuntu (yes, there were two Ubuntus listed) and a default EFI file entry. When I click the first Ubuntu I get the GRUB menu (I was too scared to try the second in case I screwed my laptop up) and I can safely load Ubuntu from there and use it (although I do have to increase my brightness every time I load Ubuntu, because it somehow reduces the brightness to complete darkness on boot). So my problem is: why isn't GRUB showing on boot, after it worked on the first day? I was on Windows 8.1 while typing this; if you have any questions or answers, I will happily answer or use them as a solution to the best of my abilities. BTW my laptop is an HP TouchSmart j-078CA.

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled 'ODI 11g – Simple, Powerful, Flexible', here we push the envelope even further. Rather than having the SQL we override defined statically in the interface design, we will have it configurable via a variable… at runtime. Imagine you have a well-defined interface shape that you want fulfilled and that shape can be satisfied from a number of different sources - that is what this allows: the ability for one interface to consume data from many different places using variables. The cool thing about ODI's reference API is that it can be fantastically flexible and useful.

    When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or can be prompted, to be correct) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is done at runtime, the context will resolve to the correct table name etc. Each time I execute, I could use a different source provider (obviously with some dependencies on KMs/technologies here). For example, in the following groovy snippet the first execution's query uses the EMP datastore from the SCOTT model, and the next uses the OTHERS datastore from the BOB model.

        m=new Properties();
        m.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\",\"EMP\", \"SCOTT\",\"D\")@>");
        s=new StartupParams(m);
        runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

        m2=new Properties();
        m2.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\",\"OTHERS\", \"BOB\",\"D\")@>");
        s2=new StartupParams(m2);
        runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

    You'll need a patch to 11.1.1.6 for this type of capability. Thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article
