Search Results

Search found 27428 results on 1098 pages for 'copy local'.


  • How do I detect a file write error in C?

    - by rich
    I have an embedded environment where a user might insert or remove a USB flash drive. I would like to know if the drive has been removed, or if there is some other problem when I try to write to the drive. However, Linux just saves the information in its buffers and returns with no indicated error. The computer I'm using comes with a 2.4.26 kernel and libc 2.3.2. I'm mounting the drive this way:

        i = mount(MEMORY_DEV_PATH, MEMORY_MNT_PATH, "vfat", MS_SYNCHRONOUS, NULL);

    That works:

        50:/root # mount
        /dev/scsi/host0/bus0/target0/lun0/part1 on /mem type vfat (rw,sync)
        50:/root #

    Later, I try to copy a file to it:

        int ifile, ofile;
        ifile = open("/tmp/tmpmidi.mid", O_RDONLY);
        if (ifile < 0) {
            perror("open in");
            break;
        }
        ofile = open(current_file_name.c_str(), O_WRONLY | O_SYNC);
        if (ofile < 0) {
            perror("open out");
            break;
        }
        #define BUFSZ 256
        char buffer[BUFSZ];
        while (1) {
            i = read(ifile, buffer, BUFSZ);
            if (i < 0) {
                perror("read");
                break;
            }
            j = write(ofile, buffer, i);
            if (j < 0) {
                perror("write");
                break;
            }
            if (i != j) {
                perror("Sizes wrong");
                break;
            }
            if (i < BUFSZ) {
                printf("Copy is finished, I hope\n");
                close(ifile);
                close(ofile);
                break;
            }
        }

    If this snippet of code is executed with a write-protected USB memory, the result is "Copy is finished, I hope" amid a flurry of error messages from the kernel on the console. I believe the same thing would happen if I simply removed the USB drive (without unmounting it). I have also fiddled with devfs. I figured out how to get it to automatically mount the drive (with the REGISTER event), but it never seems to trigger UNREGISTER when I pull out the memory. How can I determine in my program whether I have successfully created a file?

    Update 4 July: It was a silly oversight of me not to check the result from close(). Unfortunately, the file can be closed without error, so that didn't help. What about fsync()? That sounds like a good idea, but it didn't catch the error either. There might be some interesting information in /sys if I had such a thing; I believe that didn't get added until 2.6.x. The comment(s) about the quality of my flash drive are probably justified; it's one of the earlier ones. In fact, write-protect switches seem to be extremely rare these days. I think I have to use the overkill option: create a file, unmount and remount the drive, and check whether the file is there. If that doesn't solve my problem, then something is really messed up! Note to myself: make sure the file you try to create isn't already there! By the way, this does happen to be a C++ program; you can tell by the .c_str(), which I had intended to edit out for simplicity.
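
    A sketch of that "overkill option" with every return value checked; this assumes the MEMORY_DEV_PATH / MEMORY_MNT_PATH macros from the mount call above and is untested on a 2.4 kernel, so treat it as an outline rather than a known-good fix:

        #include <fcntl.h>
        #include <sys/mount.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Copy src to dst, then force the kernel to drop its cached view of
         * the vfat volume by unmounting and remounting, and finally stat()
         * the destination on the fresh mount to see if it really exists. */
        static int copy_verified(const char *src, const char *dst)
        {
            char buf[256];
            ssize_t n = 0;
            struct stat st;
            int rc = -1;
            int in = open(src, O_RDONLY);
            int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);

            if (in < 0 || out < 0)
                goto done;
            while ((n = read(in, buf, sizeof buf)) > 0)
                if (write(out, buf, n) != n)
                    goto done;                  /* short or failed write */
            if (n < 0 || fsync(out) < 0 || close(out) < 0)
                goto done;
            out = -1;
            /* The data may still live only in the page cache; remounting
             * discards that view, so stat() now reflects the medium. */
            if (umount(MEMORY_MNT_PATH) < 0)
                goto done;
            if (mount(MEMORY_DEV_PATH, MEMORY_MNT_PATH, "vfat",
                      MS_SYNCHRONOUS, NULL) < 0)
                goto done;
            rc = (stat(dst, &st) == 0) ? 0 : -1;
        done:
            if (in >= 0) close(in);
            if (out >= 0) close(out);
            return rc;
        }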


  • Bash Shell Scripting Errors: ./myDemo: 56: Syntax error: Unterminated quoted string [EDITED]

    - by ???
    Could someone take a look at this code and find out what's wrong with it?

        #!/bin/sh
        while :
        do
            echo " Select one of the following options:"
            echo " d or D) Display today's date and time"
            echo " l or L) List the contents of the present working directory"
            echo " w or W) See who is logged in"
            echo " p or P) Print the present working directory"
            echo " a or A) List the contents of a specified directory"
            echo " b or B) Create a backup copy of an ordinary file"
            echo " q or Q) Quit this program"
            echo " Enter your option and hit <Enter>: \c"
            read option
            case "$option" in
                d|D) date ;;
                l|L) ls $PWD ;;
                w|w) who ;;
                p|P) pwd ;;
                a|A) echo "Please specify the directory and hit <Enter>: \c"
                     read directory
                     if [ "$directory = "q" -o "Q" ]
                     then
                         exit 0
                     fi
                     while [ ! -d "$directory" ]
                     do
                         echo "Usage: "$directory" must be a directory."
                         echo "Re-enter the directory and hit <Enter>: \c"
                         read directory
                         if [ "$directory" = "q" -o "Q" ]
                         then
                             exit 0
                         fi
                     done
                     printf ls "$directory" ;;
                b|B) echo "Please specify the ordinary file for backup and hit <Enter>: \c"
                     read file
                     if [ "$file" = "q" -o "Q" ]
                     then
                         exit 0
                     fi
                     while [ ! -f "$file" ]
                     do
                         echo "Usage: \"$file\" must be an ordinary file."
                         echo "Re-enter the ordinary file for backup and hit <Enter>: \c"
                         read file
                         if [ "$file" = "q" -o "Q" ]
                         then
                             exit 0
                         fi
                     done
                     cp "$file" "$file.bkup" ;;
                q|Q) exit 0 ;;
            esac
            echo
        done
        exit 0

    There are some syntax errors that I can't figure out. However, I should note that on this Unix system echo -e doesn't work (don't ask me why; I don't know, I don't have any sort of permissions to change it, and even if I did I wouldn't be allowed to). Running it produces:

        ./myDemo: line 62: syntax error near unexpected token `done'
        ./myDemo: line 62: `done'

    EDIT: I fixed the while statement error; however, now when I run the script some things still aren't working correctly. It seems that in the b|B) switch statement cp $file $file.bkup doesn't actually copy the file to file.bkup? And in the a|A) switch statement ls "$directory" doesn't print the directory listing for the user to see?

        #!/bin/bash
        while $TRUE
        do
            echo " Select one of the following options:"
            echo " d or D) Display today's date and time"
            echo " l or L) List the contents of the present working directory"
            echo " w or W) See who is logged in"
            echo " p or P) Print the present working directory"
            echo " a or A) List the contents of a specified directory"
            echo " b or B) Create a backup copy of an ordinary file"
            echo " q or Q) Quit this program"
            echo " Enter your option and hit <Enter>: \c"
            read option
            case "$option" in
                d|D) date ;;
                l|L) ls pwd ;;
                w|w) who ;;
                p|P) pwd ;;
                a|A) echo "Please specify the directory and hit <Enter>: \c"
                     read directory
                     if [ ! -d "$directory" ]
                     then
                         while [ ! -d "$directory" ]
                         do
                             echo "Usage: "$directory" must be a directory."
                             echo "Specify the directory and hit <Enter>: \c"
                             read directory
                             if [ "$directory" = "q" -o "Q" ]
                             then
                                 exit 0
                             elif [ -d "$directory" ]
                             then
                                 ls "$directory"
                             else
                                 continue
                             fi
                         done
                     fi ;;
                b|B) echo "Specify the ordinary file for backup and hit <Enter>: \c"
                     read file
                     if [ ! -f "$file" ]
                     then
                         while [ ! -f "$file" ]
                         do
                             echo "Usage: "$file" must be an ordinary file."
                             echo "Specify the ordinary file for backup and hit <Enter>: \c"
                             read file
                             if [ "$file" = "q" -o "Q" ]
                             then
                                 exit 0
                             elif [ -f "$file" ]
                             then
                                 cp $file $file.bkup
                             fi
                         done
                     fi ;;
                q|Q) exit 0 ;;
            esac
            echo
        done
        exit 0

    Another thing... is there an editor that I can use to auto-parse code? I.e. something similar to NetBeans?
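
    For what it's worth, the unterminated-quote error comes from the a|A) branch of the first script: if [ "$directory = "q" -o "Q" ] is missing the closing quote after $directory. There is a logic problem hiding behind it too: [ "$x" = "q" -o "Q" ] is always true, because the -o branch tests the bare string "Q", and any non-empty string tests true. A sketch of the intended check in plain /bin/sh:

        # Either spell out both comparisons...
        if [ "$directory" = "q" ] || [ "$directory" = "Q" ]; then
            exit 0
        fi

        # ...or let case handle the alternation, avoiding -o entirely:
        case "$directory" in
            q|Q) exit 0 ;;
        esac

    The same always-true test explains the odd behaviour of the revised script's a|A) and b|B) branches. A few smaller slips, going by the listings above: printf ls "$directory" prints the literal word ls (plain ls "$directory" was meant), ls pwd lists a file named pwd rather than the working directory, w|w) can never match an upper-case W, and while $TRUE only loops because an unset $TRUE expands to an empty command, which happens to succeed; while : is the idiomatic spelling.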


  • Problem using delete[] (Heap corruption) when implementing operator+= (C++)

    - by Darel
    I've been trying to figure this out for hours now, and I'm at my wit's end. I would surely appreciate it if someone could tell me what I'm doing wrong. I have written a simple class to emulate basic functionality of strings. The class's members include a character pointer data (which points to a dynamically created char array) and an integer strSize (which holds the length of the string, sans terminator). Since I'm using new and delete, I've implemented the copy constructor and destructor. My problem occurs when I try to implement operator+=. The LHS object builds the new string correctly (I can even print it using cout) but the problem comes when I try to deallocate the data pointer in the destructor: I get a "Heap Corruption Detected after normal block" error at the memory address pointed to by the data array the destructor is trying to deallocate. Here's my complete class and test program:

        #include <iostream>
        using namespace std;

        // Class to emulate string
        class Str {
        public:
            // Default constructor
            Str(): data(0), strSize(0) { }

            // Constructor from string literal
            Str(const char* cp) {
                data = new char[strlen(cp) + 1];
                char *p = data;
                const char* q = cp;
                while (*q)
                    *p++ = *q++;
                *p = '\0';
                strSize = strlen(cp);
            }

            Str& operator+=(const Str& rhs) {
                // create new dynamic memory to hold concatenated string
                char* str = new char[strSize + rhs.strSize + 1];
                char* p = str;            // new data
                char* i = data;           // old data
                const char* q = rhs.data; // data to append

                // append old string to new string in new dynamic memory
                while (*p++ = *i++)
                    ;
                p--;
                while (*p++ = *q++)
                    ;
                *p = '\0';

                // assign new values to data and strSize
                delete[] data;
                data = str;
                strSize += rhs.strSize;
                return *this;
            }

            // Copy constructor
            Str(const Str& s) {
                data = new char[s.strSize + 1];
                char *p = data;
                char *q = s.data;
                while (*q)
                    *p++ = *q++;
                *p = '\0';
                strSize = s.strSize;
            }

            // destructor
            ~Str() { delete[] data; }

            const char& operator[](int i) const { return data[i]; }
            int size() const { return strSize; }

        private:
            char *data;
            int strSize;
        };

        ostream& operator<<(ostream& os, const Str& s) {
            for (int i = 0; i != s.size(); ++i)
                os << s[i];
            return os;
        }

        // Test constructor, copy constructor, and += operator
        int main() {
            Str s = "hello";   // destructor for s works ok
            Str x = s;         // destructor for x works ok
            s += "world!";     // destructor for s gives error
            cout << s << endl;
            cout << x << endl;
            return 0;
        }
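
    The corruption looks like an off-by-one in operator+=: the second while (*p++ = *q++) loop already copies rhs's terminating '\0' and leaves p one past it, so the final *p = '\0' writes to index strSize + rhs.strSize + 1 of a buffer whose valid indices end at strSize + rhs.strSize. A sketch of a drop-in replacement, using the same data/strSize members:

        Str& Str::operator+=(const Str& rhs)
        {
            char* str = new char[strSize + rhs.strSize + 1];
            char* p = str;

            // Copy the old contents, stopping at (not past) the terminator;
            // data may be null for a default-constructed Str.
            for (const char* i = data; i && *i; ++i)
                *p++ = *i;
            // Append the right-hand side the same way.
            for (const char* q = rhs.data; q && *q; ++q)
                *p++ = *q;
            *p = '\0';   // single terminator, at index strSize + rhs.strSize

            delete[] data;
            data = str;
            strSize += rhs.strSize;
            return *this;
        }

    While in there, note the class has a copy constructor and destructor but no copy-assignment operator, so assigning one Str to another would end with two objects deleting the same array; the rule of three applies.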


  • Dot Net Nuke module works in "Edit" mode but not for "View": cache problem?

    - by Godeke
    I have a DNN module that simply runs some JavaScript to compute a price based on a few input fields. This module works fine on our production site, but we had a company do a skin for us to improve the look of the site, and the module fails under this new system. (DNN 05.06.00 (459), although it was 5.5 prior... I updated in a futile hope that it was a bug in the old revision.)

    What is incredibly odd about this is that the module works fine when I'm logged in to DNN and using Edit mode as an administrator. In this case the small snippet of JavaScript loads fine and filling the fields results in a price. On the other hand, if I click "View" (or, more importantly, if I'm not logged in at all) the page loads a cached copy. Even odder, I have found that the cache files in \Portals\2\Cache\Pages are generated and then only the cached data is used. When the cached copy is loaded, the JavaScript doesn't appear (it is normally created via Page.ClientScript.RegisterClientScriptBlock()). Additionally, the button which posts the data to the server doesn't execute any of the server-side code (confirmed with a debugger) but instead just reloads the cached copy.

    If I manually delete the files in \Portals\2\Cache\Pages then everything works properly, but I have to do so after every page load: failing to do so simply loads the page as it was last generated, repeatedly. Resetting the application (either via the UI or by editing web.config) doesn't change this, and clearing the cache from the Host Settings page doesn't actually clear these cached pages. I'm guessing that Edit mode bypasses the cache in some way, but I have gone as far as turning off all caching on the site (which is horrible for performance) and the cached version is still loaded. Has anyone seen anything like this? Shouldn't clearing the cache clear the files (I'm using the File provider for caching)? Shouldn't even a cached page go back to the server if the user posts back?

    EDIT: I should point out that permissions don't appear to be a problem on the cache directory... other pages' cached output is deleted from this folder; just this page has this issue.

    EDIT 2: Clarifying some settings and conditions which I didn't provide. First, this module works fine in production under DNN 5.6.0. In our test environment with the consulting company's changes it fails (the changes are skin and page layout only, in theory: the module source itself verifies as unchanged). All cache settings and the like have been verified the same between the two, and we only resorted to setting the module cache to 0 and -1 (and disabling the test site's cache entirely) when we couldn't find another cause for the problem. I have watched the cache work correctly on many other pages in test: there is something about this page that is causing the problem. We have punted and are creating an installable skin based on the consultant's work, as I suspect they have somehow corrupted the DNN install (database side, I think).


  • Simplest way to get current time in current timezone using boost::date_time ?

    - by timday
    If I do date +%H-%M-%S on the commandline (Debian/Lenny), I get a user-friendly (not UTC, not DST-less, the time a normal person has on their wristwatch) time printed. What's the simplest way to obtain the same thing with boost::date_time? If I do this:

        std::ostringstream msg;
        boost::local_time::local_date_time t =
            boost::local_time::local_sec_clock::local_time(
                boost::local_time::time_zone_ptr() );
        boost::local_time::local_time_facet* lf(
            new boost::local_time::local_time_facet("%H-%M-%S") );
        msg.imbue(std::locale(msg.getloc(), lf));
        msg << t;

    Then msg.str() is an hour earlier than the time I want to see. I'm not sure whether this is because it's showing UTC or local timezone time without a DST correction (I'm in the UK). What's the simplest way to modify the above to yield the DST corrected local timezone time? I have an idea it involves boost::date_time::c_local_adjustor but can't figure it out from the examples.
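
    A minimal sketch of the simplest route: boost::posix_time::second_clock::local_time() already applies the C library's time-zone and DST rules (the same ones the date command uses), so no time_zone_ptr is needed; c_local_adjustor is only required when converting an existing UTC ptime:

        #include <boost/date_time/posix_time/posix_time.hpp>
        #include <boost/date_time/c_local_time_adjustor.hpp>
        #include <iostream>
        #include <sstream>

        int main()
        {
            using namespace boost::posix_time;

            // Wall-clock time, DST included:
            ptime now = second_clock::local_time();

            // Equivalent, starting from a UTC time:
            typedef boost::date_time::c_local_adjustor<ptime> local_adj;
            ptime also_now = local_adj::utc_to_local(second_clock::universal_time());
            (void)also_now;  // shown only for comparison

            std::ostringstream msg;
            time_facet* f = new time_facet("%H-%M-%S");  // owned by the locale
            msg.imbue(std::locale(msg.getloc(), f));
            msg << now;
            std::cout << msg.str() << std::endl;
            return 0;
        }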


  • Visual Studio 2013 Static Code Analysis in depth: What, When, and How?

    - by Hosam Kamel
    In this post I'll illustrate the following points in detail:

        What is static code analysis?
        When to use it
        Supported platforms
        Supported Visual Studio versions
        How to use it
            Run Code Analysis manually
            Run Code Analysis automatically
            Run Code Analysis when checking source code in to TFS version control (TFVC)
            Run Code Analysis as part of Team Build
        Understand the Code Analysis results and learn how to fix them
        Create your own custom rule set
        Q & A
        References

    What is static code analysis?

    The Static Code Analysis feature of Visual Studio performs static analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules, which mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into categories targeting specific coding issues such as security, design, interoperability, and globalization. Static here means analyzing the source code without executing it; this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (see the "Using Code Review to Improve Quality" video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage.

    When to use it?

    Running the Code Analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. Code Analysis can also find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms

    .NET Framework, native (C and C++), and database applications.

    Supported Visual Studio versions

    All editions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); see the feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use it?

    Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis manually

    To run code analysis manually on a project, on the Analyze menu click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis automatically

    To run code analysis each time you build a project, select Enable Code Analysis on Build on the project's property page.

    Run Code Analysis when checking source code in to TFS version control (TFVC)

    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies: rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the "Edit project-level information" permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions, see http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

        1. In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control.
        2. In the Source Control dialog box, select the Check-in Policy tab.
        3. Click Add to create a new check-in policy.
        4. Double-click the existing Code Analysis item in the Policy Type list to change the policy.

    Check or uncheck the policy options based on the configuration you need:

        Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
        Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
        Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.

    (See the code analysis rule set reference on MSDN.) What is a rule set? A rule set is a group of code analysis rules; for example, Microsoft.Design is a rule set name, and "Do not declare static members on generic types" is one code analysis rule within it.

    Once you have configured the analysis rules, the policy is enabled for all team members on this project: whenever a team member checks any source code in to TFVC, the policy section will highlight the Code Analysis policy. TFS is a very extensible platform, so you can implement your own custom Code Analysis check-in policy; see http://msdn.microsoft.com/en-us/library/dd492668.aspx for more details, but be aware of compatibility between different TFS versions: http://msdn.microsoft.com/en-us/library/bb907157.aspx

    Run Code Analysis as part of Team Build

    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications and perform other important functions. Code Analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow, and you will see code analysis warnings as part of the build report.

    Understand the Code Analysis results and learn how to fix them

    Now that we have gone through the Code Analysis configurations and the different ways of running it, let's dig into the results, how to understand them, and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Each result item contains:

        1. Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
        2. Title: the title of the warning message.
        3. Description: a description of the problem or a suggested fix.
        4. File Name: the file name and the line of code that violates the code analysis rule set.
        5. Category: the code analysis category for this error.
        6. Warning / Error: depends on how you configure it in the rule set; the default is the Warning level.
        7. Action:
            Copy: copies the warning information to the clipboard.
            Create Work Item: if you're connected to Team Foundation Server, you can create a work item (most probably a Task or a Bug) and assign it to a developer to fix a certain code analysis warning.
            Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning, which makes the suppression more discoverable; In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project, which can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning; it does not suppress the warning globally.

    Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, click the Check ID hyperlink and you'll be directed to MSDN (online or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a custom Code Analysis rule set

    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor; again, this requires Visual Studio Premium or Ultimate. See "How to: Create a Custom Rule Set" on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A

        Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
        Code Analysis for SharePoint apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
        Code Analysis for SQL Server database projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
        ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
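
    To make the Suppress Message options described above concrete, here is a sketch of what the two choices generate, using the Microsoft.Design rule quoted earlier; the Repository class is hypothetical, and the Target string in a real GlobalSuppressions.cs would be fully namespace-qualified:

        using System.Diagnostics.CodeAnalysis;

        public class Repository<T>
        {
            // "In Source": the attribute sits directly above the member
            // that triggered CA1000, keeping the suppression discoverable.
            [SuppressMessage("Microsoft.Design",
                "CA1000:DoNotDeclareStaticMembersOnGenericTypes",
                Justification = "Factory method is the intended entry point.")]
            public static Repository<T> Create()
            {
                return new Repository<T>();
            }
        }

        // "In Suppression File": the same attribute lands in the project's
        // GlobalSuppressions.cs, still scoped to that one member:
        // [assembly: SuppressMessage("Microsoft.Design",
        //     "CA1000:DoNotDeclareStaticMembersOnGenericTypes",
        //     Scope = "member", Target = "Repository`1.#Create()")]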
    References

        A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World: http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
        What is New in Code Analysis for Visual Studio 2013: http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
        Analyze the code quality of Windows Store apps using Visual Studio static code analysis: http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
        [Hands-on lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality: http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx

    Originally posted at "Hosam Kamel | Developer & Platform Evangelist": http://blogs.msdn.com/hkamel


  • No attachable databases were found on the SQL Server

    - by George
    I'm fumbling my way through a basic installation of TFS 2010... I see that the TFS_Configuration and Tfs_DefaultCollection databases were successfully created on my local machine. The installation seemed to go smoothly, but it looks like I need to configure a Team Project Collection before I can start using it. Yet the wizard seems unable to locate the appropriate database that I expected should have been created during the setup process. The server name defaulted properly to the name of the local machine. Did I miss a step somewhere?


  • WPF: Menu Items only bind command parameters once.

    - by Aran Mulholland
    I've noticed this a couple of times when using menus with commands; they are not very dynamic. Check this out: I am creating a menu from a collection of colours, which I use to colour a column in a datagrid. Anyway, when I first bring up the menu (it's a context menu) the command parameter binding happens and it binds to the column that the context menu was opened on. However, the next time I bring it up, it seems WPF caches the menu and doesn't rebind the command parameter, so I can set the colour only on the initial column that the context menu appeared on. I have got around this situation in the past by making the menu totally dynamic: destroying the collection when the menu closed and forcing a rebuild the next time it opened. I don't like this hack. Anyone got a better way?

        <MenuItem Header="Colour"
                  ItemsSource="{Binding RelativeSource={RelativeSource AncestorType={x:Type local:ResultEditorGrid}}, Path=ColumnColourCollection}"
                  ItemTemplate="{StaticResource colourHeader}">
            <MenuItem.Icon>
                <Image Source="{StaticResource ColumnShowIcon16}" />
            </MenuItem.Icon>
            <MenuItem.ItemContainerStyle>
                <Style TargetType="MenuItem" BasedOn="{StaticResource systemMenuItemStyle}">
                    <!-- Warning: don't change the order of the following two setters,
                         otherwise the command parameter gets set after the command
                         fires; not much use, eh? -->
                    <Setter Property="CommandParameter">
                        <Setter.Value>
                            <MultiBinding>
                                <MultiBinding.Converter>
                                    <local:ColumnAndColourMultiConverter/>
                                </MultiBinding.Converter>
                                <Binding RelativeSource="{RelativeSource AncestorType={x:Type DataGridColumnHeader}}" Path="Column"/>
                                <Binding Path="."/>
                            </MultiBinding>
                        </Setter.Value>
                    </Setter>
                    <Setter Property="Command" Value="{Binding RelativeSource={RelativeSource AncestorType={x:Type local:ResultEditorGrid}}, Path=ColourColumnCommand}" />
                </Style>
            </MenuItem.ItemContainerStyle>
        </MenuItem>


  • Start a process as LocalSystem using ProcessStartInfo

    - by auhorn
    I am trying to start a process as the LocalSystem account using this code:

        ProcessStartInfo _startInfo = new ProcessStartInfo(commandName);
        _startInfo.UseShellExecute = false;
        _startInfo.UserName = @"NT AUTHORITY\SYSTEM";
        _startInfo.CreateNoWindow = true;
        _startInfo.Arguments = argument;
        _startInfo.RedirectStandardOutput = true;
        using (Process _p = Process.Start(_startInfo))
        {
            _retVal = _p.StandardOutput.ReadToEnd();
            _p.WaitForExit();
        }

    But I always get the same error message saying "Logon failure: unknown user name or bad password". The user calling the function is a local admin and should be able to start a process with LocalSystem privileges. I have also tried different combinations, but no luck. I would appreciate any help. Thanks
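
    For what it's worth, this failure is expected: when UserName is set, Process.Start logs the user on via CreateProcessWithLogonW, which also requires the Password property, and LocalSystem is not an account that can be logged on with a password, so no combination will work. A common workaround is to make the launcher itself run as LocalSystem, for example a small Windows service, and start the child without credentials so it inherits the service's token. A sketch with made-up service and executable names:

        using System.Diagnostics;
        using System.ServiceProcess;

        // Install with: sc create SystemLauncher binPath= "..." obj= LocalSystem
        public class SystemLauncherService : ServiceBase
        {
            protected override void OnStart(string[] args)
            {
                var startInfo = new ProcessStartInfo("worker.exe")
                {
                    UseShellExecute = false,  // needed for redirection
                    CreateNoWindow = true,
                    RedirectStandardOutput = true
                    // no UserName/Password: the child inherits our token
                };
                Process.Start(startInfo);     // child runs as LocalSystem
            }
        }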


  • How to access localhost websites through HTTP requests from the BlackBerry simulator?

    - by SIA
    Hi everybody, I am developing a BlackBerry application and I want to access websites on my localhost (local machine). I am running the application on the BlackBerry 8350 simulator. From my code I can request any website from the internet and I get the response. But when I give the URL as localhost:8080/portal/index.php, it displays an error message: HTTP Error 404; description: The requested resource (/portal/index.php) is not available. I am running my Apache web server on port 8080 on Windows. How can I access my local machine's websites from the BlackBerry simulator? Please help and guide me. Thanks, SIA


  • Unable to run Wix Custom Action in MSI

    - by Grandpappy
    I'm trying to create a custom action for my WiX install, and it's just not working, and I'm unsure why. Here's the bit in the appropriate WiX file:

        <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.dll" />
        <CustomAction Id="HelperAction"
                      BinaryKey="INSTALLERHELPER"
                      DllEntry="CustomAction1"
                      Execute="immediate" />

    Here's the full class file for my custom action:

        using Microsoft.Deployment.WindowsInstaller;

        namespace InstallerHelper
        {
            public class CustomActions
            {
                [CustomAction]
                public static ActionResult CustomAction1(Session session)
                {
                    session.Log("Begin CustomAction1");
                    return ActionResult.Success;
                }
            }
        }

    The action is run by a button press in the UI (for now):

        <Control Id="Next" Type="PushButton" X="248" Y="243" Width="56" Height="17"
                 Default="yes" Text="!(loc.WixUINext)">
            <Publish Event="DoAction" Value="HelperAction">1</Publish>
        </Control>

    When I run the MSI, I get this error in the log:

        MSI (c) (08:5C) [10:08:36:978]: Connected to service for CA interface.
        MSI (c) (08:4C) [10:08:37:030]: Note: 1: 1723 2: SQLHelperAction 3: CustomAction1 4: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        MSI (c) (08:4C) [10:08:38:501]: Product: SessionWorks :: Judge Edition -- Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        Action ended 10:08:38: SQLHelperAction. Return value 3.
        DEBUG: Error 2896: Executing action SQLHelperAction failed.
        The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2896. The arguments are: SQLHelperAction, ,

    Neither of the two error codes or messages it gives me is enough to tell me what's wrong, or perhaps I'm just not understanding what they're saying is wrong. At first I thought it might be because I was using WiX 3.5, so just to be sure I tried WiX 3.0, but I get the same error. Any ideas on what I'm doing wrong?
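
    One frequent cause of error 1723 with managed (DTF) custom actions: the Binary element must point at the self-extracting InstallerHelper.CA.dll that the WiX C# custom action project produces (via MakeSfxCA), not at the plain managed assembly; the raw .dll exposes no native CustomAction1 export for MSI to call. A sketch, assuming the .CA.dll lands in the same Lib folder:

        <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.CA.dll" />
        <CustomAction Id="HelperAction"
                      BinaryKey="INSTALLERHELPER"
                      DllEntry="CustomAction1"
                      Execute="immediate"
                      Return="check" />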


  • Sunrise / set calculations

    - by dassouki
    I'm trying to calculate the sunset / rise times using Python based on the link provided below. My results done through Excel and Python do not match the real values. Any ideas on what I could be doing wrong? My Excel sheet can be found under http://transpotools.com/sun_time.xls

        # Created on 2010-03-28
        # @author: dassouki
        # @source: http://williams.best.vwh.net/sunrise_sunset_algorithm.htm
        # @summary: this is based on the Nautical Almanac Office, United States
        #           Naval Observatory.

        import math, sys

        class TimeOfDay(object):

            def calculate_time(self, in_day, in_month, in_year, lat, long,
                               is_rise, utc_time_zone):
                # is_rise is a bool; when it's true it indicates rise,
                # and if it's false it indicates setting time

                # set Zenith
                zenith = 96
                # offical = 90 degrees 50'
                # civil = 96 degrees
                # nautical = 102 degrees
                # astronomical = 108 degrees

                # 1- calculate the day of year
                n1 = math.floor(275 * in_month / 9)
                n2 = math.floor((in_month + 9) / 12)
                n3 = (1 + math.floor(in_year - 4 * math.floor(in_year / 4) + 2) / 3)
                new_day = n1 - (n2 * n3) + in_day - 30
                print "new_day ", new_day

                # 2- calculate rising / setting time
                if is_rise:
                    rise_or_set_time = new_day + ((6 - (long / 15)) / 24)
                else:
                    rise_or_set_time = new_day + ((18 - (long / 15)) / 24)
                print "rise / set", rise_or_set_time

                # 3- calculate sun mean anamoly
                sun_mean_anomaly = (0.9856 * rise_or_set_time) - 3.289
                print "sun mean anomaly", sun_mean_anomaly

                # 4- calculate true longitude
                true_long = (sun_mean_anomaly +
                             (1.916 * math.sin(math.radians(sun_mean_anomaly))) +
                             (0.020 * math.sin(2 * math.radians(sun_mean_anomaly))) +
                             282.634)
                print "true long ", true_long

                # make sure true_long is within 0, 360
                if true_long < 0:
                    true_long = true_long + 360
                elif true_long > 360:
                    true_long = true_long - 360
                else:
                    true_long
                print "true long (360 if) ", true_long

                # 5- calculate s_r_a (sun_right_ascenstion)
                s_r_a = math.degrees(math.atan(0.91764 * math.tan(math.radians(true_long))))
                print "s_r_a is ", s_r_a

                # make sure it's between 0 and 360
                if s_r_a < 0:
                    s_r_a = s_r_a + 360
                elif true_long > 360:
                    s_r_a = s_r_a - 360
                else:
                    s_r_a
                print "s_r_a (modified) is ", s_r_a

                # s_r_a has to be in the same Quadrant as true_long
                true_long_quad = (math.floor(true_long / 90)) * 90
                s_r_a_quad = (math.floor(s_r_a / 90)) * 90
                s_r_a = s_r_a + (true_long_quad - s_r_a_quad)
                print "s_r_a (quadrant) is ", s_r_a

                # convert s_r_a to hours
                s_r_a = s_r_a / 15
                print "s_r_a (to hours) is ", s_r_a

                # 6- calculate sun declination in terms of cos and sin
                sin_declanation = 0.39782 * math.sin(math.radians(true_long))
                cos_declanation = math.cos(math.asin(sin_declanation))
                print " sin/cos declanations ", sin_declanation, ", ", cos_declanation

                # sun local hour
                cos_hour = (math.cos(math.radians(zenith)) -
                            (sin_declanation * math.sin(math.radians(lat))) /
                            (cos_declanation * math.cos(math.radians(lat))))
                print "cos_hour ", cos_hour

                # extreme north / south
                if cos_hour > 1:
                    print "Sun Never Rises at this location on this date, exiting"
                    # sys.exit()
                elif cos_hour < -1:
                    print "Sun Never Sets at this location on this date, exiting"
                    # sys.exit()
                print "cos_hour (2)", cos_hour

                # 7- sun rise/set local time calculations
                if is_rise:
                    sun_local_hour = (360 - math.degrees(math.acos(cos_hour))) / 15
                else:
                    sun_local_hour = math.degrees(math.acos(cos_hour)) / 15
                print "sun local hour ", sun_local_hour

                sun_event_time = sun_local_hour + s_r_a - (0.06571 * rise_or_set_time) - 6.622
                print "sun event time ", sun_event_time

                # final result
                time_in_utc = sun_event_time - (long / 15) + utc_time_zone
                return time_in_utc

        # test through main
        def main():
            print "Time of day App "
            # test: fredericton, NB
            # answer: 7:34 am
            long = 66.6
            lat = -45.9
            utc_time = -4
            d = 3
            m = 3
            y = 2010
            is_rise = True
            tod = TimeOfDay()
            print "TOD is ", tod.calculate_time(d, m, y, lat, long, is_rise, utc_time)

        if __name__ == "__main__":
            main()
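
    Two things stand out against the linked algorithm. First, operator precedence in the cos_hour expression: as written, the division happens before the subtraction, so it computes cos(zenith) - (sinDec*sin(lat)) / (cosDec*cos(lat)) instead of (cos(zenith) - sinDec*sin(lat)) / (cosDec*cos(lat)). Second, the test data looks sign-swapped for Fredericton, NB (about 45.9 N, 66.6 W, i.e. lat = 45.9 and long = -66.6 in the usual east-positive convention). A sketch of the corrected expression:

        import math

        def cos_local_hour_angle(zenith_deg, lat_deg, sin_declination):
            """(cos(zenith) - sin(dec)*sin(lat)) / (cos(dec)*cos(lat)),
            with the subtraction grouped before the division."""
            cos_declination = math.cos(math.asin(sin_declination))
            return ((math.cos(math.radians(zenith_deg))
                     - sin_declination * math.sin(math.radians(lat_deg)))
                    / (cos_declination * math.cos(math.radians(lat_deg))))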


  • Tableview reload data problem (iPhone SDK)

    - by neha
    Hi all, I have a class A which is a subclass of UITableViewController, and one more class B, which actually displays my table view with its content and is a subclass of A. There's an XML parser which parses my XML and stores the content in an NSMutableArray on the application delegate. Now, I fetch this delegate array into a local NSMutableArray in class B, to minimise the communication between the two classes (the delegate and class B), and display that. After a certain condition is met in class A, I'm calling the XML parser to refill the delegate array and calling class B's tableview reload method. The problem is that when I call the tableview's reloadData, class B's delegate methods are called, but before that I need to grab the delegate array into the local array in class B. How shall I do that? Can anybody please help? Thanx in advance.
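
    One low-friction option: give class B a refresh method that re-copies the delegate's array first and only then reloads, and have class A call that method instead of calling reloadData directly. A sketch with assumed names for the delegate class and properties:

        // In class B:
        - (void)refreshFromAppDelegate
        {
            MyAppDelegate *appDelegate =
                (MyAppDelegate *)[[UIApplication sharedApplication] delegate];
            // Re-copy so the table's data source is current before the reload:
            self.localItems = [NSMutableArray arrayWithArray:appDelegate.items];
            [self.tableView reloadData];
        }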


  • Silverlight 4 MediaElement play sound

    - by NealWalters
    I converted a local sound file to a resource, which built this in my XAML:

        <UserControl.Resources>
            <my:Uri x:Key="SoundFiles">file:///c:/Audio/HebrewDemo/Shalom.wav</my:Uri>
        </UserControl.Resources>

    I did this by pasting a local disk mp3 filename into Source, then clicking the "dot" by Source and choosing "Extract Value to Resource". When I run, it tells me that "Uri" is not valid, and sure enough, in IntelliSense I see other elements that start with "Uri" but not just Uri by itself.

    In the real world I want to specify a dynamic mp3 file name. For example, I might have a database of foreign-language words used for flashcards, and I want to play a sound file from a URL. But I thought I would try to walk before running...
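
    Since the goal is a dynamic mp3 file name anyway, skipping the XAML resource and assigning MediaElement.Source in code-behind may be simpler; a sketch where the URL and the LayoutRoot panel name are placeholders:

        private void PlayWord(string mp3Url)
        {
            var media = new MediaElement { AutoPlay = true };
            LayoutRoot.Children.Add(media);  // LayoutRoot: the root layout panel
            media.Source = new Uri(mp3Url, UriKind.Absolute);
        }

        // e.g. PlayWord("http://example.com/flashcards/shalom.mp3");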


  • How to configure Hudson and git plugin with an SSH key

    - by jlpp
    I've got Hudson (continuous integration system) with the git plugin running on a Tomcat Windows service. msysgit is installed and the msysgit bin dir is in the path. PuTTY/Pageant/plink are installed and msysgit is configured to use them. When I run a job that attempts to clone the git repository I get the following error:

        $ git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace"
        ERROR: Error cloning remote repo 'origin' : Could not clone git@hostname:project.git
        ERROR: Cause: Error performing git clone -o origin git@hostname:project.git e:\HUDSON_HOME\jobs\Project Trunk\workspace
        Trying next repository
        ERROR: Could not clone from a repository
        FATAL: Could not clone
        hudson.plugins.git.GitException: Could not clone

    Running git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace" from the command line works without error. I've confirmed that my issue is not the same as http://stackoverflow.com/questions/1177292/hudson-git-clone-error because git is in the path and I don't get any error about the git executable on Hudson's Configure System page. This leads me to believe that the problem is that the user who owns the Tomcat/Hudson Windows service (Local System) has no SSH key set up to be able to clone the git repository.

    My question is: how can I set things up so that the git plugin/msysgit know to use a particular SSH key when trying to clone? I don't think Pageant will work because the Tomcat service is running as the "Local System" user, but I may be wrong. I have tried setting Pageant up as a service (using runassvc.exe), passing the appropriate key, and having it run as "Local System". The Tomcat/Hudson service doesn't seem to be able to see the key from the Pageant service. Are there any other techniques for setting up a key? Thanks.

    EDIT: The discussion on http://n4.nabble.com/Hudson-with-git-and-ssh-td375633.html shows that someone else had a similar question. ssh-agent was suggested, and this tool does come with msysgit, but I'm not sure how to use it in conjunction with the Hudson service. Still, good clue if anyone can fill in the gaps. Thanks to Peter for the comment with the link. Also, the discussion on http://n4.nabble.com/questions-about-git-and-github-plug-ins-td383420.html starts off with the same question. I'm trying to resurrect that thread.
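
    One workaround to consider (an assumption, not something confirmed in those threads): bypass Pageant entirely by pointing the GIT_SSH environment variable of the Tomcat service at a wrapper that hands plink an explicit key file, since git passes the host and command straight through to whatever GIT_SSH names. The paths and key name here are made up:

        @echo off
        rem ssh-wrapper.bat -- set GIT_SSH=C:\hudson\ssh-wrapper.bat for the service.
        rem -batch suppresses interactive prompts so the build cannot hang.
        plink.exe -batch -i C:\hudson\hudson-ci.ppk %*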


  • Setup Google Test (gtest) with Eclipse on OS X

    - by ejel
    What is the procedure to set up Google Test to work under Eclipse on Mac OS X? I followed the instructions in the README to compile and install gtest as a framework from Xcode. Now I want to use gtest with Eclipse. Currently, it compiles fine but fails during the build. I suppose Eclipse does not use the framework concept as Xcode does and needs a different linking approach, but I'm not sure which files I should link against during the build.

        g++ -L/usr/local/lib -L/usr/local/lib/libgtest.a -L/Library/Frameworks/gtest.framework -arch i386 -o "Raytracer" ./test/sample_test.o ./src/Raytracer.o
        Undefined symbols:
          "testing::Test::~Test()", referenced from:
              DemoTest_SANITY_Test::~DemoTest_SANITY_Test() in sample_test.o
              DemoTest_SANITY_Test::~DemoTest_SANITY_Test() in sample_test.o
          "testing::internal::AssertHelper::~AssertHelper()", referenced from:
              DemoTest_SANITY_Test::TestBody() in sample_test.o
              DemoTest_SANITY_Test::TestBody() in sample_test.o
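
    A likely culprit: -L only names search directories, so -L/usr/local/lib/libgtest.a silently adds nothing, and no gtest library ever reaches the link line, which matches the undefined testing:: symbols. A sketch of the corrected link step, assuming libgtest.a was installed to /usr/local/lib (add -lgtest_main as well if the tests rely on gtest's bundled main):

        g++ -arch i386 -o "Raytracer" ./test/sample_test.o ./src/Raytracer.o \
            -L/usr/local/lib -lgtest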


  • ruby1.9.1 - sqlite3 problem on ubuntu 9.10 x64 (no such file to load -- sqlite3)

    - by doriath
    Hi, I have a problem with sqlite3: it is not working.

        irb(main):001:0> require 'sqlite3'
        LoadError: no such file to load -- sqlite3
            from (irb):1:in `require'
            from (irb):1
            from /usr/bin/irb:12:in `<main>'

    I have installed the following packages:

        sudo apt-get install ruby1.9.1-full
        sudo apt-get install rubygems1.9.1
        sudo gem update --system
        sudo apt-get install sqlite3 libsqlite3-dev
        sudo gem install sqlite3-ruby
        sudo apt-get install libopenssl-ruby1.9.1

    The applications have the following versions:

        $ ruby --version
        ruby 1.9.1p243 (2009-07-16 revision 24175) [x86_64-linux]
        $ sqlite3 --version
        3.6.16
        $ gem --version
        1.3.6

    and

        $ gem list --local

        *** LOCAL GEMS ***

        actionmailer (2.3.5)
        actionpack (2.3.5)
        activerecord (2.3.5)
        activeresource (2.3.5)
        activesupport (2.3.5)
        ffi (0.6.2)
        rack (1.0.1)
        rails (2.3.5)
        rake (0.8.7)
        rubygems-update (1.3.6)
        sqlite3-ruby (1.2.5)

    What have I missed?
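
    A guess worth checking: on Ubuntu 9.10 the 1.9.1 toolchain binaries are suffixed, and a bare gem command can belong to ruby 1.8, installing sqlite3-ruby into the wrong gem path. Re-running the install with the suffixed commands would confirm it:

        sudo gem1.9.1 install sqlite3-ruby
        gem1.9.1 list --local | grep sqlite3
        ruby1.9.1 -rsqlite3 -e "puts :ok"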


  • windbg dv cmd fail - Private symbols (symbols.pri) are required for locals

    - by leif
    I have a C++ application compiled with VS 2008 with a PDB file enabled. When I try to use the dv command to display local vars, it shows the following message:

        Unable to enumerate locals, HRESULT 0x80004005
        Private symbols (symbols.pri) are required for locals.
        Type ".hh dbgerr005" for details.

    Note that:

        I've run the dv command on the correct frame, which has the symbol file.
        I can use the dt command successfully.
        I've included the symbol path, and the PDB file has been loaded successfully, as follows:

            start    end        module name
            00400000 0043f000   helloworld   (private pdb symbols)  c:\test...

    Does anyone know the cause? Is there any configuration I missed to enable local var watch? Or is a VS 2008 PDB not supported by WinDbg (I'm using the latest WinDbg version)?


  • Add Free Google Apps to Your Website or Blog

    - by Matthew Guay
    Would you like to have an email address from your own domain, but prefer Gmail's interface and integration with Google Docs? Here's how you can add the free Google Apps Standard to your site and get the best of both worlds.

    Note: To sign up for Google Apps and get it set up on your domain, you will need to be able to add info to your WordPress blog or change domain settings manually.

    Getting Started

    Head to the Google Apps signup page (link below), and click the Get Started button on the right. Note that we are signing up for the free Google Apps, which allows a maximum of 50 users; if you need more than 50 email addresses for your domain, you can choose Premier Edition instead for $50/year. Select that you are the administrator of the domain, and enter the domain or subdomain you want to use with Google Apps. Here we're adding Google Apps to the techinch.com site, but we could instead add Apps to mail.techinch.com if needed... click Get Started.

    Enter your name, phone number, an existing email address, and other administrator information. The Apps signup page also includes some survey questions about your organization, but you only have to fill in the required fields. On the next page, enter a username and password for the administrator account. Note that the user name will also be the administrative email address, as username@yourdomain.com. Now you're ready to authenticate your Google Apps account with your domain. The steps are slightly different depending on whether your site is on WordPress.com or on your own hosting service or server, so we'll show how to do it both ways.

    Authenticate and Integrate Google Apps with WordPress.com

    To add Google Apps to a domain you have linked to your WordPress.com blog, select Change yourdomain.com CNAME record and click Continue. Copy the code under #2, which should be something like googleabcdefg123456. Do not click the button at the bottom; wait until we've completed the next step.

    Now, in a separate browser window or tab, open your WordPress Dashboard. Click the arrow beside Upgrades, and select Domains from the menu. Click the Edit DNS link beside the domain name you're adding to Google Apps. Scroll down to the Google Apps section, and paste your code from Google Apps into the verification code field. Click Generate DNS records when you're done. This will add the needed DNS settings to your records in the box above the Google Apps section. Click Save DNS records. Now, go back to the Google Apps signup page, and click I've completed the steps above.

    Authenticate Google Apps on Your Own Server

    If your website is hosted on your own server or hosting account, you'll need to take a few more steps to add Google Apps to your domain. You can add a CNAME record to your domain host using the same information that you would use with a WordPress account, or you can upload an HTML file to your site's main directory. In this test we're going to upload an HTML file to our site for verification. Copy the code under #1, which should be something like googleabcdefg123456. Do not click the button at the bottom; wait until we've completed the next step first.

    Create a new HTML file and paste the code in it. You can do this easily in Notepad: create a new document, paste the code, and then save it as googlehostedservice.html. Make sure to select the type as All Files, or otherwise the file will have a .txt extension. Upload this file to your web server via FTP or a web dashboard for your site. Make sure it is in the top level of your site's directory structure, and try visiting it at yoursite.com/googlehostedservice.html. Now, go back to the Google Apps signup page, and click I've completed the steps above.

    Setup Your Email on Google Apps

    When this is done, your Google Apps account should be activated and ready to finish setting up. Google Apps will offer to launch a guide to step you through the rest of the process; you can click Launch guide if you want, or click Skip this guide to continue on your own and go directly to the Apps dashboard. If you choose to open the guide, you'll be able to easily learn the ropes of Google Apps administration. Once you've completed the tutorial, you'll be taken to the Google Apps dashboard. Most of the Google Apps will be available for immediate use, but Email may take a bit more setup. Click Activate email to get your Gmail-powered email running on your domain.

    Add Google MX Records to Your Server

    You will need to add Google MX records to your domain registrar in order to have your mail routed to Google. If your domain is hosted on WordPress.com, you've already made these changes, so simply click I have completed these steps. Otherwise, you'll need to manually add these records before clicking that button. Adding MX entries is fairly easy, but the steps may depend on your hosting company or registrar. With some hosts, you may have to contact support to have them add the MX records for you. Our site's host uses the popular cPanel for website administration, so here's how we added the MX entries through cPanel.

    Add MX Entries through cPanel

    Log in to your site's cPanel, and click the MX Entry link under Mail. Delete any existing MX records for your domain or subdomain first, to avoid any complications or interactions with Google Apps. If you think you may want to revert to your old email service in the future, save a copy of the records so you can switch back if you need. Now, enter the MX records that Google listed. Here's our account after we added all of the entries. Finally, return to your Google Apps Dashboard and click the I have completed these steps button at the bottom of the page.

    Activating Service

    You're now officially finished activating and setting up your Google Apps account. Google will first have to check the MX records for your domain; this only took around an hour in our test, but Google warns it can take up to 48 hours in some cases. You may then see that Google is updating its servers with your account information. Once again, this took much less time than Google's estimate. When everything's finished, you can click the link to access the inbox of your new administrator email account in Google Apps.

    Welcome to Gmail... at your own domain! All of the Google Apps work just the same in this version as they do in the public @gmail.com version, so you should feel right at home. You can return to the Google Apps dashboard from the administrative email account by clicking Manage this domain at the top right. In the Dashboard, you can easily add new users and email accounts, as well as change settings in your Google Apps account and add your site's branding to your Apps. Your Google Apps will work just like their standard @gmail.com counterparts. Here's an example of an inbox customized with the techinch logo and a Gmail theme.

    Links to Remember

    Here are the common links to your Google Apps online. Substitute your domain or subdomain for yourdomain.com.

        Dashboard: https://www.google.com/a/cpanel/yourdomain.com
        Email: https://mail.google.com/a/yourdomain.com
        Calendar: https://www.google.com/calendar/hosted/yourdomain.com
        Docs: https://docs.google.com/a/yourdomain.com
        Sites: https://sites.google.com/a/yourdomain.com

    Conclusion

    Google Apps offers you great webapps and webmail for your domain, and lets you take advantage of Google's services while still maintaining the professional look of your own domain. Setting up your account can be slightly complicated, but once it's finished, it will run seamlessly and you'll never have to worry about email or collaboration with your team again.

    Signup for the free Google Apps Standard


  • SQLAuthority Book Review – DBA Survivor: Become a Rock Star DBA

    - by pinaldave
    DBA Survivor: Become a Rock Star DBA – Thomas LaRock Link to Amazon Link to Flipkart First of all, I thank all my readers when I wrote that I could not get this book in any local book stores, because they offered me to send a copy of this good book. A very special mention goes to Sripada and Jayesh for they gave so much effort in finding my home address and sending me the hard copy. Before, I did not have the copy of the book, but now I have two of it already! It surprises me how my readers were able to find my home address, which I have not publicly shared. Quick Review: This is indeed a one easy-to-read and fun book. We all work day and night with technology yet we should not forget to show our love and care for our family at home. For our souls that starve for peace and guidance, this one book is the “it” book for all the technology enthusiasts. Though this book was specifically written for DBAs, the reach is not limited to DBAs only because the lessons incorporated in it actually applies to all. This is one of the most motivating technical books I have read. Detailed Review: Let us go over a few questions first: Who wants to be as famous as rockstars in the field of Database Administration? How can one learn what it takes to become a top notch software developer? If you are a beginner in your field, how will you go to next level? Your boss may be very kind or like Dilbert’s Boss, what will you do? How do you keep growing when Eco-system around you does not support you? You are almost at top but there is someone else at the TOP, what do you do and how do you avoid office politics? As a database developer what should be your basic responsibility? and many more… I was able to completely read book in one sitting and I loved it. Before I continue with my opinion, I want to echo the opinion of Kevin Kline who has written the Forward of the book. He has truly suggested that “You hold in your hands a collection of insights and wisdom on the topic of database administration gained through many years of hard-won experience, long nights of study, and direct mentorship under some of the industry’s most talented database professionals and information technology (IT) experts.” Today, IT field is getting bigger and better, while talking about terabytes of the database becomes “more” normal every single day. The gods and demigods of database professionals are taking care of these large scale databases and are carefully maintaining them. In this world, there are only a few beginnings on the first step. There are many experts in different technology fields who are asked to address the issues with databases. There is YOU and ME, who is just new to this work. So we ask ourselves WHERE to begin and HOW to begin. We adore and follow the religion of our rockstars, but oftentimes we really have no idea about their background and their struggles. Every rockstar has his success story which needs to be digested before learning his tricks and tips. This book starts with the same note and teaches the two most important lessons for anybody who wants to be a DBA Rockstar –  to focus on their single goal of learning and to excel the technology. The story starts with three simple guidelines – Get Prepared, Get Trained, Get Certified. Once a person learns the skills, and then, it would be about time that he needs to enrich or to improve those skills you have learned. I am sure that the right opportunity will come finding themselves and they will not have to go run behind it. 
However, the real challenge for any person is the first day or first week. A new employee, no matter how much experienced he is, sometimes has no clue about what should one do at new job. Chapter 2 and chapter 3 precisely talk about what one should do as soon as the new job begins. It is also written with keeping the fact in focus that each job can be very much different but there are few infrastructure setups and programming concepts are the same. Learning basics of database was really interesting. I like to focus on the roots of any technology. It is important to understand the structure of the database before suggesting what indexes needs to be created, the same way this book covers the most essential knowledge one must learn by most database developers. I think the title of the fourth chapter is my favorite sentence in this book. I can see that I will be saying this again and again in the future – “A Development Server Is a Production Server to a Developer“. I have worked in the software industry for almost 8 years now and I have seen so many developers sitting on their chairs and waiting for instructions from their lead about how to improve the code or what to do the next. When I talk to them, I suggest that the experiment with their server and try various techniques. I think they all should understand that for them, a development server is their production server and needs to pay proper attention to the code from the beginning. There should be NO any inappropriate code from the beginning. One has to fully focus and give their best, if they are not sure they should ask but should do something and stay active. Chapter 5 and 6 talks about two essential skills for any developer and database administration – what are the ethics of developers when they are working with production server and how to support software which is running on the production server. I have met many people who know the theory by heart but when put in front of keyboard they do not know where to start. The first thing they do opening the browser and searching online, instead of opening SQL Server Management Studio. This can very well happen to anybody who is experienced as well. Chapter 5 and 6 addresses that situation as well includes the handy scripts which can solve almost all the basic trouble shooting issues. “Where’s the Buffet?” By far, this is the best chapter in this book. If you have ever met me, you would know that I love food. I think after reading this chapter, I felt Thomas has written this just keeping me in mind. I think there will be many other people who feel the same way, too. Even my wife who read this chapter thought this was specifically written for me. I will not talk any more about this chapter as this is one must read chapter. And of course this is about real ‘FOOD‘. I am an SQL Server Trainer and Consultant and I totally agree with the point made in the chapter 8 of this book. Yes, it says here that what is necessary to train employees and people. Millions of dollars worth the labor is continuously done in the world which has faults and incorrect. Once something goes wrong, very expensive consultant comes in and fixes the problem. This whole cycle which can be stopped and improved if proper training is done. There is plenty of free trainings available as well, if one cannot afford paid training. “Connect. Learn. Share” – I think this is a great summary and bird’s eye view of this book. Networking is the key. 
    Everything discussed in this book can be taken to the next level if one properly uses these tips and continuously grows with them. Connecting with others, helping each other learn, and building a good knowledge-sharing environment should be everyone's goal.

    Before I end the review, I want to share a real experience. I have personally met a DBA who had worked in a single department of a company for so long that, when that department closed and he was moved to a different one, he could not adjust and quit the job, even though the people and the company around him were the same. Adjusting to a new environment gets much tougher as a person becomes more and more experienced. This book precisely addresses that issue along with its solutions.

    I just cannot stop comparing the book with my personal journey. I found that so many things in the book coincidentally describe exactly how we developers and DBAs think. I must express special thanks to Thomas for taking time out of his personal life to write this book for us. This book is indeed a book for everybody who wants to grow in a healthy way in a tough and competitive environment.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Reading EventLog C# Errors

    - by Robert
    I have this code in my ASP.NET application, written in C#, that tries to read the event log, but it returns an error.

    EventLog aLog = new EventLog();
    aLog.Log = "Application";
    aLog.MachineName = "."; // Local machine
    foreach (EventLogEntry entry in aLog.Entries)
    {
        if (entry.Source.Equals("tvNZB"))
            Label_log.Text += "<p>" + entry.Message;
    }

    One of the entries it returns is "The description for Event ID '0' in Source 'tvNZB' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event:'Service started successfully.'" I only want the 'Service started successfully.' part. Any ideas?
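    That warning text appears when the event source's message resource DLL is not registered on the machine reading the log; the raw strings the service wrote are still available on each entry via EventLogEntry.ReplacementStrings. A minimal sketch of one way to fall back to them; the prefix check is illustrative, not from the original question:

    // Sketch: when the message DLL for the source is missing, Message
    // contains the generic warning, but the original inserted strings
    // are still exposed via EventLogEntry.ReplacementStrings.
    foreach (EventLogEntry entry in aLog.Entries)
    {
        if (!entry.Source.Equals("tvNZB")) continue;

        string text = entry.Message;
        if (text.StartsWith("The description for Event ID") && entry.ReplacementStrings.Length > 0)
        {
            // e.g. "Service started successfully."
            text = string.Join(" ", entry.ReplacementStrings);
        }
        Label_log.Text += "<p>" + text;
    }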

    Read the article

  • IOException reading a large file from a UNC path into a byte array using .NET

    - by Matt
    I am using the following code to attempt to read a large file (280Mb) into a byte array from a UNC path:

    public void ReadWholeArray(string fileName, byte[] data)
    {
        int offset = 0;
        int remaining = data.Length;
        log.Debug("ReadWholeArray");
        FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
        while (remaining > 0)
        {
            int read = stream.Read(data, offset, remaining);
            if (read <= 0)
                throw new EndOfStreamException(
                    String.Format("End of stream reached with {0} bytes left to read", remaining));
            remaining -= read;
            offset += read;
        }
    }

    This is blowing up with the following error:

    System.IO.IOException: Insufficient system resources exist to complete the requested

    If I run this using a local path it works fine; in my test case the UNC path actually points to the local box. Any thoughts on what is going on here?
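    One plausible cause (an assumption, not confirmed in the question) is that the very first Read call requests the entire 280 MB in one operation, and over a UNC/SMB path such an oversized single read can exhaust kernel buffer resources. A sketch of the same method with each read capped at a modest chunk size; the 1 MB value is illustrative:

    // Sketch: cap each Read so no single request over the UNC path
    // asks for hundreds of megabytes at once.
    public void ReadWholeArray(string fileName, byte[] data)
    {
        const int chunkSize = 1024 * 1024; // 1 MB per Read call (illustrative value)
        int offset = 0;
        int remaining = data.Length;
        using (FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
        {
            while (remaining > 0)
            {
                int read = stream.Read(data, offset, Math.Min(chunkSize, remaining));
                if (read <= 0)
                    throw new EndOfStreamException(
                        String.Format("End of stream reached with {0} bytes left to read", remaining));
                remaining -= read;
                offset += read;
            }
        }
    }

    The using block also disposes the stream, which the original code never closed.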

    Read the article

  • Rsync on Windows - Socket operation on non-socket

    - by TLS
    I get the following error when trying to run the latest Cygwin version of rsync in Windows XP SP2. The error occurs for attempts at both local syncs (that is, source and destination on the local hard disk only) and remote syncs (using "-e ssh" from the openssh package). Any advice on how to fix or work around it?

    bash-3.2$ rsync -a dir1 dir2
    rsync: Failed to dup/close: Socket operation on non-socket (108)
    rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/pipe.c(143) [receiver=2.6.9]
    rsync: read error: Connection reset by peer (104)
    rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/io.c(604) [sender=2.6.9]

    Read the article

  • Android - phone number contact format

    - by Daniel Benedykt
    Hi. In Android I can get the phone numbers of all the contacts without any problem. The problem is that for most users some numbers are stored as 'local' numbers, meaning they don't include the country code. For example, if the user lives in the US and has 2 contacts:

    1) John - 555-123-1234 (local; the leading 1 is not shown)
    2) Jane - 44-123456787 (England phone number)

    The question is: how do I get all the numbers in an international format when some of them don't include the country code? Any way to figure that out? Thanks

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) reports the error: "Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found". What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files. How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using the "mq" extension, but it looks overly complicated for what I'm trying to achieve.

    Background: I can push and pull the following: source files, directories, .class files, and a .jar file to and from the server, using both MacHG and TortoiseHG. I successfully committed to my local repository the addition, for the first time, of the 6 large .exe, .dmg, etc. installer files (about 130Mb total). In the following commit to my local repository, I removed ("untracked" / forget) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server, keeping aligned with the "keep everything in history" philosophy of the source control system). I can commit .txt and .java files etc. using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help!

    Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4) and TortoiseHG v1.0.4 (SCM 1.5.4). Server = HTTPS, IIS7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/ In IIS7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST and HEAD).

    My hgweb.cgi file on the server is as follows:

    #!/usr/bin/env python
    #
    # An example hgweb CGI script, edit as necessary

    # Path to repo or hgweb config to serve (see 'hg help hgweb')
    #config = "/path/to/repo/or/config"

    # Uncomment and adjust if Mercurial is not installed system-wide:
    #import sys; sys.path.insert(0, "/path/to/python/lib")

    # Uncomment to send python tracebacks to the browser if an error occurs:
    #import cgitb; cgitb.enable()

    from mercurial import demandimport; demandimport.enable()
    from mercurial.hgweb import hgweb, wsgicgi
    application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
    wsgicgi.launch(application)

    My hgweb.config file on the server is as follows:

    [collections]
    C:\Mercurial Repositories = C:\Mercurial Repositories

    [web]
    baseurl = /hg
    allow_push = usernamea
    allow_push = usernameb
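    One cause worth ruling out (an assumption; the question does not confirm it) is IIS request filtering: IIS7.5 rejects request bodies larger than roughly 30 MB by default and answers with a 404.13 status, which Mercurial surfaces exactly as "HTTP Error 404: Not Found". Since the failing changeset is about 130 MB, raising the limit in the site's web.config is a reasonable first experiment. A hedged sketch, with an illustrative 500 MB cap:

    <!-- Sketch: raise IIS request filtering's default upload cap (~30 MB),
         which otherwise returns HTTP 404.13 for larger pushes.
         524288000 bytes = 500 MB, an illustrative value. -->
    <configuration>
      <system.webServer>
        <security>
          <requestFiltering>
            <requestLimits maxAllowedContentLength="524288000" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>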

    Read the article
