Search Results

  • JUnit: 4.8.1 "Could not find class"

    - by Patrick
    Ok, I am like others new to JUnit and having a difficult time trying to get it working. I have searched the forum, but the answers provided I am just not getting. If anyone out there could lend me a hand I would greatly appreciate it. Let me provide the basics: OS: Mac OS X.6

        export JUNIT_HOME="/Developer/junit/junit4.8.1"
        export CVSROOT="/opt/cvsroot"
        export PATH="/usr/local/bin:/usr/local/sbin:/usr/localmysql/bin:/opt/PalmSDK/Current/bin/:/usr/local/mysql/bin:$PATH:$JUNIT_HOME:$CVSROOT"
        export CLASSPATH="$CLASSPATH:$JUNIT_HOME/junit-4.8.1.jar:$JUNIT_HOME"

    I can compile a test class from a java file, however when I try to then run the test

        java org.junit.runner.JUnitCore MyTest.class

    I get the following:

        JUnit version 4.8.1
        Could not find class: MyTest.class
        Time: 0.001
        OK (0 tests)

    Now I have been in the directory with MyTest.class, which is just somewhere in my file system; I tried moving the source folder to the "junit" folder and the "junit/junit4.8.1" folder, with the same result. I cannot even run the tests that came with JUnit. Thanks, Patrick
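
    One thing worth checking (my note, not part of the original post): JUnitCore takes a fully qualified class name, not a file name, so "MyTest.class" is interpreted as a class named "class" in a package named "MyTest". A minimal sketch of an invocation that should work, run from the directory containing MyTest.class:

        java -cp .:$JUNIT_HOME/junit-4.8.1.jar org.junit.runner.JUnitCore MyTest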

    Read the article

  • Grunt usemin with templates

    - by gang
    Given the following directory structure:

        Gruntfile.js
        app/
          index.php
          js/
          css/
          templates/
            template.php
        dist/

    how can I configure grunt usemin to update the references to styles and scripts in my template file relative to the index.php which uses the template? Currently the tasks look like this:

        useminPrepare: {
          html: '<%= yeoman.app %>/templates/template.php',
          options: {
            dest: '<%= yeoman.dist %>'
          }
        },
        usemin: {
          html: ['<%= yeoman.dist %>/{,*/}*.php'],
          css: ['<%= yeoman.dist %>/css/*.css'],
          options: {
            dirs: ['<%= yeoman.dist %>']
          }
        }

    And the blocks inside of the template look like this:

        <!-- build:js js/main.js -->
        <script src="js/script1.js"></script>
        <script src="js/script2.js"></script>
        <!-- endbuild -->
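
    One possibility (an assumption on my part, not something confirmed in the question): useminPrepare resolves script paths relative to the HTML file it parses, so for a template living in app/templates the root option may be needed to point resolution back at the app directory. A sketch:

        useminPrepare: {
          html: '<%= yeoman.app %>/templates/template.php',
          options: {
            root: '<%= yeoman.app %>',  // resolve "js/script1.js" against app/, not app/templates/
            dest: '<%= yeoman.dist %>'
          }
        }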

    Read the article

  • Problem installing RVM...

    - by Cody
    I have executed the commands as prescribed in the instructions on the RVM website, but things don't seem to work. Fetching the code from the git repository runs smoothly, but when I try to use rvm notes, the error

        /usr/local/bin/rvm: line 73: /home/cody/.rvm/scripts/rvm: No such file or directory

    flashes in multiple lines and doesn't stop till I hit Ctrl+C. I am running Ubuntu 8.04 and currently I am running Ruby 1.9.2. Sorry if I am missing out any necessary information. Thanks in advance.
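
    A couple of hedged suggestions (mine, not from the original post): the error means the wrapper at /usr/local/bin/rvm expects a per-user install under ~/.rvm that isn't there, so re-running the installer and sourcing RVM in the shell profile are the first things to check:

        # does the script the wrapper is looking for actually exist?
        ls ~/.rvm/scripts/rvm

        # the canonical line RVM asks you to add to ~/.bashrc or ~/.bash_profile
        [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"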

    Read the article

  • Secondary Domain Adds Extra Folder in URL during Postbacks

    - by Joshua
    My ASP.NET web site (C#, 3.5 framework, IIS7) is hosted at GoDaddy. There are multiple sites on the account. Currently, when I perform postbacks or Response.Redirects on a secondary web site, the following URL appears in the address bar: www.mywebsite.com/webfolder/default.aspx, where "webfolder" is the sub-directory on the server where the web site is hosted (i.e. ServerRoot/webfolder). The site seems to work with or without the folder in the URL. Is there a way to remove the folder from the URLs during postback? I think I have to use URL rewriting (which GoDaddy supports via Microsoft's Rewrite Module) but I'm not sure how.
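
    For what it's worth, a sketch of the kind of rule the IIS URL Rewrite module accepts in web.config - hedged, since the exact pattern depends on how GoDaddy maps the secondary domain onto the sub-folder, and "webfolder" here is just the placeholder from the question:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HideWebfolder" stopProcessing="true">
                <match url="^webfolder/(.*)$" />
                <action type="Redirect" url="{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>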

    Read the article

  • An important question on iPhone file writing

    - by Kyle
    I use the NSHomeDirectory() function to get the app's home folder, and write to the Documents directory within that. I'm curious, though: what happens when the user downloads an update for the app from the App Store? Will it all be deleted? When I delete the app on the device, then reinstall it, it's wiped out. So I'm curious to know what will happen with an update. I can't find this in the documentation at all. Thanks a lot for reading. I really tried to find this asked somewhere else first, but couldn't. Hopefully this page will be informative to guys like me who are confused on the subject.
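
    For reference, the standard way to build that Documents path (straight from the platform API, though not quoted in the original post):

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0]; // <app home>/Documents

    Files written under Documents are preserved across App Store updates; it is deleting the app that removes them, which matches the behaviour described above.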

    Read the article

  • Windows Server 2003 Hacked - Files Being Uploaded

    - by jreedinc
    Blank directories are being created on my Windows Server 2003 virtual server, with sub-directories that have garbage names (for example: "88ÿ ÿ ÿÿþþ þþ13þ"). It looks like someone is uploading bootlegged DVDs and pirated software. All of my bandwidth and file space is being eaten up. Could this be a shared-permissions issue? Where should I look to further investigate this? My security permissions for the directory that is being hit are as follows:

        Administrators - ALL GRANTED
        IIS_WPG        - Read & Execute, List Folder Contents, Read
        Internet Guest - DENY
        SYSTEM         - ALL GRANTED
        Users          - Read & Execute, List Folder Contents, Read

    My Event Viewer is showing many Logon/Logoff events with NO IP?

    Read the article

  • Installing unsigned x64 driver to work with libusbdotnet

    - by user216194
    Hi all - I am currently in a Windows 7 dev environment working to get a device to initialize with libusbdotnet. The device (a USB mass storage device) connects and runs using the default USB mass storage driver for Windows. I want to replace this driver with the one created by the .INF Wizard in libusbdotnet. The operating system is 64-bit, and by default the INF Wizard produces this driver, but I am unable to select it - because it is unsigned, I believe - when I go to "Pick from a list of drivers" and point to the directory where the newly created device drivers are. I have enabled "TEST MODE" using DSEO but I'm still unable to select this file. Anyone familiar with libusbdotnet, or with directing devices to work with a specific unsigned driver in Windows (do I need the .inf file? or the .sys?): do you have any advice about where I'm going wrong? Thanks!
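
    Two hedged notes (mine, not the poster's): on 64-bit Windows 7, test-signing mode is enabled through the boot configuration rather than per-driver, and it requires a reboot to take effect:

        rem run from an elevated command prompt, then reboot
        bcdedit /set testsigning on

    Even in test mode the package generally needs at least a test certificate; pressing F8 during boot and choosing "Disable Driver Signature Enforcement" is the other documented way to load a fully unsigned driver for a single session.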

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

        CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran
            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey,
                           Premium = src.Premium,
                           Deductible = src.Deductible,
                           EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;
            delete from Integration.MergePolicy
            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting.

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
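
    For concreteness, a minimal sketch of the fix described above (the key column is my assumption; the post doesn't say which one was used):

        CREATE CLUSTERED INDEX IX_MergePolicy ON Integration.MergePolicy (PolicyId);

    With a clustered index, emptied pages are deallocated as rows are deleted, so a scan of an empty table touches a single page instead of the thousands a repeatedly filled-and-emptied heap can retain.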

    Read the article

  • Image Uploading - security issues

    - by TenaciousImpy
    Hi, I'm developing an ASP.NET web app and would like the user to be able to either upload an image from their local system, or pass in a URL to an image. The image can be either a JPG or PNG. What security issues should I be concerned about in doing this? I've seen various ways of embedding code within JPG files. Are there any methods in C# (or external libraries) which can confirm that a file is a JPG/PNG, and otherwise throw an error? At the very least, I'm making the directory which holds uploaded images non-browsable and putting a max size limit of 1 MB on uploads, but I'd like to implement further checks. Thanks for any advice.
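
    A minimal sketch of a signature check (my own illustration, not from the thread): verify the upload's leading "magic bytes" rather than trusting the extension or Content-Type.

        using System;
        using System.IO;
        using System.Linq;

        static class ImageSniffer
        {
            // JPEG files start FF D8 FF; PNG files start with a fixed 8-byte signature.
            static readonly byte[] Jpg = { 0xFF, 0xD8, 0xFF };
            static readonly byte[] Png = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };

            public static bool LooksLikeJpgOrPng(Stream s)
            {
                byte[] head = new byte[8];
                int read = s.Read(head, 0, head.Length);
                s.Position = 0; // rewind so the caller can still save the stream
                return (read >= Jpg.Length && head.Take(Jpg.Length).SequenceEqual(Jpg))
                    || (read >= Png.Length && head.Take(Png.Length).SequenceEqual(Png));
            }
        }

    A header check defeats simple renaming, but not a valid image with a payload appended, so re-encoding the image server-side (load it and save a fresh copy) is the stronger defence.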

    Read the article

  • bash script problem: find, mv tilde files created by gedit

    - by Ke
    Hi, I'm using Linux with gedit, which has the wonderful habit of creating a temp file with a tilde at the end for every file I edit. I'm trying to move all of these files at once to a different folder using the following:

        find . -iname “*.php~” -exec mv {} /mydir \;

    However, it's now giving me syntax errors, as if it were searching through each file and trying to move the piece of text. I just want to move all of the files ending in .php~ to another directory. Any idea how I do that? Cheers, Ke
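
    Worth pointing out (my observation, not from the post): the quotes in the command above are curly typographic quotes, which the shell does not treat as quoting characters, so the pattern is not protected from the shell at all. With plain ASCII quotes the same command behaves:

        find . -iname '*.php~' -exec mv {} /mydir \;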

    Read the article

  • Accessing py2exe program over network in Windows 98 throws ImportErrors

    - by darvids0n
    I'm running a py2exe-compiled python program from one server machine on a number of client machines (mapped to a network drive on every machine, say W:). For Windows XP and later machines, I have so far had zero problems with Python picking up W:\python23.dll (yes, I'm using Python 2.3.5 for W98 compatibility and all that). It will then use W:\zlib.pyd to decompress W:\library.zip containing all the .pyc files like os and such, which are then imported and the program runs with no problems. The issue I'm getting is on some Windows 98 SE machines (note: SOME Windows 98 SE machines; others seem to work with no apparent issues). What happens is, the program runs from W:, and W:\python23.dll is, I assume, found (since I'm getting Python ImportErrors, we'd need to be able to execute a Python import statement), but a couple of things don't work:

    1) If W:\library.zip contains the only copy of the .pyc files, I get "ZipImportError: can't decompress data; zlib not available" (nonsense, considering W:\zlib.pyd IS available and works fine with the XP and higher machines on the same network).

    2) If the .pyc files are actually bundled INSIDE the python exe by py2exe, OR put in the same directory as the .exe, OR put into a named subdirectory which is then set as part of the PYTHONPATH variable (e.g. W:\pylib), I get "ImportError: no module named os" (os is the first module imported, before sys and anything else).

    Come to think of it, sys.path wouldn't be available to search if os was imported before it, maybe? I'll try switching the order of those imports, but my question still stands: why is this a sporadic issue, working on some networks but not on others? And how would I force Python to find the files that are bundled inside the very executable I run? I have immediate access to the working Windows 98 SE machine, but I only get access to the non-working one (a customer of mine) every morning before their store opens. Thanks in advance!

    EDIT: Okay, big step forward. After debugging with PY2EXE_VERBOSE, the problem occurring on the specific W98SE machine is that it's not using the right path syntax when looking for imports. Firstly, it doesn't seem to read the PYTHONPATH environment variable (there may be a py2exe-specific one I'm not aware of, like PY2EXE_VERBOSE). Secondly, it only looks in one place before giving up (if the files are bundled inside the EXE, it looks there; if not, it looks in library.zip).

    EDIT 2: In fact, according to this, there is a difference between the sys.path of the Python interpreter and that of py2exe executables. Specifically, sys.path contains only a single entry: the full pathname of the shared code archive. Blah. No fallbacks? Not even the current working directory? I'd try adding W:\ to PATH, but py2exe doesn't conform to any sort of standards for locating system libraries, so it won't work. Now for the interesting bit. The path it tries to load atexit, os, etc. from is:

        W:\\library.zip\<module>.<ext>

    Note the single slash after library.zip, but the double slash after the drive letter (someone correct me if this is intended and should work). It looks like if this is a string literal, then since the slash isn't doubled, it's read as an (invalid) escape sequence and the raw character is printed (giving W:\library.zipos.pyd, W:\library.zipos.dll, ... instead of with a slash); if it is NOT a string literal, the double slash might not be normpath'd automatically (as it should be) and so the double slash confuses the module loader.
    Like I said, I can't just set PYTHONPATH=W:\\library.zip\\ because it ignores that variable. It may be worth using sys.path.append at the start of my program, but hard-coding module paths is an absolute LAST resort, especially since the problem occurs in ONE configuration of an outdated OS. Any ideas? I have one, which is to normpath the sys.path... pity I need os for that. Another is to just append os.getenv('PATH') or os.getenv('PYTHONPATH') to sys.path... again, needing the os module. The site module also fails to initialise, so I can't use a .pth file. I also recently tried the following code at the start of the program:

        for pth in sys.path:
            fErr.write(pth)
            fErr.write(' to ')
            pth.replace('\\\\', '\\')  # Fix Windows 98 pathing issues
            fErr.write(pth)
            fErr.write('\n')

    But it can't load linecache.pyc, or anything else for that matter; it can't actually execute those commands from the looks of things. Is there any way to use built-in functionality which doesn't need linecache to modify the sys.path dynamically? Or am I reduced to hard-coding the correct sys.path?
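
    One hedged observation about the snippet above (mine, not the poster's): str.replace returns a new string rather than modifying pth in place, so even when that loop runs it never changes sys.path. A rewrite that needs nothing beyond builtins, in case the import machinery really is that fragile:

        import sys

        # Rebuild sys.path with doubled backslashes collapsed. Only builtins are
        # used, so this works even before os or linecache can be imported.
        sys.path = [p.replace('\\\\', '\\') for p in sys.path]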

    Read the article

  • Visual Studio's CSS Intellisense - Absolute paths & MasterPage files

    - by Overflew
    Hi there. I've noticed that to get CSS IntelliSense working in VS, the paths have to be relative - is this the case? However, it seems

        <link href="/resources/test.css" [...] />

    is far more practical than

        <link href="resources/test.css" [...] />

    I'm including the CSS in the master page, and don't see much good in including it as a content block just to get the relative paths correct for each directory depth. I've had a quick try with inline code resolving the path, but no dice there either (for IntelliSense). I feel I'm missing something fairly simple - what's the correct approach here to have CSS IntelliSense work across the pages in the app during dev, and render fine in any deployed state? Cheers. (Note - I'm aware that a <% if (false) { %> type hack is required for user controls.)
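
    One approach (an assumption on my part, not something the poster confirmed): keep the markup app-relative and let the server resolve it at runtime, e.g. in the master page:

        <link rel="stylesheet" type="text/css" href='<%= ResolveUrl("~/resources/test.css") %>' />

    Design-time IntelliSense may still insist on a plain relative href, which is the crux of the question; one workaround is a second, design-only <link> wrapped in the same if (false) server-side block mentioned above, so Visual Studio sees it but it never renders.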

    Read the article

  • Trouble setting up php Zend include path

    - by behrk2
    Hello, I am trying to set up a PHP include path to my Zend Framework. I am very confused about how to do this. My Zend Framework is located at the following location on my server:

        amazon/ZendFramework-1.10.3-minimal

    I am going to be creating a couple of PHP files in the amazon/ directory that will require the Zend Framework. My include is:

        include("ZendFramework-1.10.3-minimal/library/Zend/Service/Amazon.php");

    This works; however, inside of Amazon.php is the line

        require_once 'Zend/Rest/Client.php';

    ...and then Client.php has more dependencies set up like that, and so on. How can I set up my include path so that Amazon.php and Client.php (and so on) can correctly reference the location of the Zend Framework? Thanks
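
    The usual Zend Framework arrangement (well established, though the exact path below is inferred from the question) is to put the framework's library directory on PHP's include_path; after that, every internal require_once 'Zend/...' resolves on its own:

        <?php
        // Prepend the ZF library directory to the include path.
        set_include_path(implode(PATH_SEPARATOR, array(
            dirname(__FILE__) . '/ZendFramework-1.10.3-minimal/library',
            get_include_path(),
        )));

        require_once 'Zend/Service/Amazon.php';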

    Read the article

  • Multi-level clones with Git?

    - by Chad Johnson
    So, I'm thinking of having the following centralized setup with Git (each of these are clones):

        stable
        development
        developer1
        developer2
        developer3

    So, I created my stable repository:

        git --bare init

    made the 'development' clone:

        git clone ssh://host.name//path/to/stable/project.git development

    and made a 'developer' clone:

        git clone ssh://host.name//path/to/development/project.git developer

    So now I make a change, commit, and then push from my developer account:

        git commit --all
        git push

    and the change goes to the development clone. But now, when I ssh to the server, go to the development clone directory, and run "git fetch" or "git pull", I don't see the changes. So what do I do? Am I totally misunderstanding things and doing things wrong? How can I see the changes in the 'development' clone that I pushed from my 'developer' clone? This worked fine in Mercurial.
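
    A hedged explanation (mine, not from the post): pushing into a non-bare repository updates its branch refs but never its working tree, so the files on disk in 'development' keep showing the old state, and git pull reports nothing new because the branch has already moved. Two ways out, sketched:

        # in the development clone, after a push has landed:
        git checkout -f     # force the working tree to match the updated branch

        # or avoid the problem entirely: make intermediate repos bare, like
        # 'stable', so there is no working tree to go stale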

    Read the article

  • How to maintain a persistent state of an app in Android?

    - by androidbase Praveen
    Hi all, I am working on my app. In between, I pressed the Home button on the device, and my app went to the background tasks. After that, if I long-press the Home button, it shows my app in its persistent state, i.e. where I was and what I had done in the app. But if I click my app's icon in the launcher, it restarts the app. What I want: if my app is in the background tasks it should wake up, else it should start fresh. How do I achieve that? Any idea?
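
    One commonly cited workaround (hedged: my suggestion, not something from the post) is to detect in the launch activity that it is a duplicate instance in an existing task and finish it immediately, so the tap falls through to the task as the user left it:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // If this activity is not the root of its task, the app was already
            // running: drop this duplicate and return to the existing state.
            if (!isTaskRoot()) {
                finish();
                return;
            }
            setContentView(R.layout.main);
        }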

    Read the article

  • asp.net mvc 2 web application inside a Web site?

    - by Amitabh
    I have an ASP.NET web site deployed as a website inside IIS 7.5:

        http://localhost/WebSite

    Then I have a second ASP.NET MVC 2 web application which is deployed as a sub-application inside the above website, so the MVC application should work on the following URL:

        http://localhost/WebSite/MvcApp/

    The web site works fine, but when I browse the MVC URL it gives the following error:

        HTTP Error 403.14 - Forbidden
        The Web server is configured to not list the contents of this directory.
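
    A hedged suggestion (not from the question): 403.14 at an MVC root usually means extensionless routing never ran, so IIS fell back to a directory listing. Checking that the sub-application runs in an Integrated-pipeline application pool, and (on IIS 7.x of that era) routing all requests through the managed modules in the sub-application's web.config, is the usual remedy:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>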

    Read the article

  • What's wrong with my BackgroundWorker method?

    - by diver-d
    I am trying to get a background worker process working in a WPF application. It creates 2 files then crashes.

        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += delegate(object s, DoWorkEventArgs args)
        {
            CreateFile(i.ToString());
        };
        worker.RunWorkerAsync();

        private void CreateFile(string fileName)
        {
            string path = string.Format(@"{0}\{1}.txt", directory, fileName);
            using (StreamWriter sw = new StreamWriter(path))
            {
                sw.WriteLine(fileName);
            }
        }

    I get this error: "The requested operation cannot be performed on a file with a user-mapped section open." What am I doing wrong? Any help would be great.
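
    A hedged guess at a contributing bug (the snippet doesn't show where i comes from): if this code runs in a loop, the anonymous delegate captures the loop variable i itself, so the workers can all observe the same value and collide on the same file. Copying it to a local first gives each worker its own name; fileCount below is a stand-in for however many files are wanted:

        for (int i = 0; i < fileCount; i++)
        {
            int local = i; // capture a copy, not the shared loop variable
            BackgroundWorker worker = new BackgroundWorker();
            worker.DoWork += delegate { CreateFile(local.ToString()); };
            worker.RunWorkerAsync();
        }

    The "user-mapped section open" error itself frequently implicates an outside process (search indexer, antivirus) holding the freshly written file, which is worth ruling out as well.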

    Read the article

  • managing library dependencies with Boost.Build and C++

    - by user931794
    I want to develop a project which can be built on a bunch of different platforms. The project code will be in C++; what's the best way to manage libraries? I plan on using bjam as the build system, because I'm going to be depending on Boost and their unit-testing framework as well. The two dependent libraries are Boost itself and FLTK. The possibilities that come to mind for library management are:

    1. Include build artifacts (binaries) and headers for all supported platforms in-tree.
    2. Include complete source for all dependent libraries in-tree, and somehow script them as dependencies.
    3. A combination of 1 and 2, like node.js does with v8.
    4. Inform the user that they need to build the libraries themselves and then have them on the PATH or in some special directory, like libcurl does with its dependencies.

    What is the best approach here? The project will probably not grow beyond a few thousand lines over the next six months, but I want to make the right choice so that I don't have to come back and switch everything around later.

    Read the article

  • What user runs the git hook?

    - by Jasie
    I have a post-update hook on my server, such that when I git push, it does a pull on the live web directory. However, while the push always succeeds, the post-update hook sometimes fails. The hook is pretty simple:

        #!/bin/sh
        #
        # An example hook script to prepare a packed repository for use over
        # dumb transports.
        #
        # To enable this hook, rename this file to "post-update".

        cd /var/www
        env -i git pull

    I'm pushing updates from a variety of places, but sometimes I have to log in as root on the server and manually do a

        env -i git pull

    I only have to do it 20% of the time, though. Any ideas why it would fail randomly? Thanks!
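
    To the title question: hooks run on the server as whatever account performed the push over SSH, so pushes from different machines or accounts can hit different permissions in /var/www - a plausible source of intermittent failures, though that's my inference, not something in the post. A hedged variant of the hook that also sidesteps the classic environment pitfall:

        #!/bin/sh
        unset GIT_DIR          # otherwise git in /var/www would still target the bare repo
        cd /var/www || exit 1
        git pull

    env -i achieves a similar reset by wiping the environment wholesale; unsetting GIT_DIR alone keeps PATH and HOME (and any needed SSH configuration) intact.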

    Read the article

  • Reference WiX define made in included file.

    - by leiflundgren
    I have a defines.wxi file which contains some good definitions used in all my wxs files. When I attempt to reference the defined value I get

        Undefined preprocessor variable '$(var.MAGE_FOLDER)'

    back in my face. I guess there is something trivial I am missing here... Any ideas?

    defines.wxi:

        <Include>
          <?define IMAGE_FOLDER="Images" ?>
        </Include>

    Product.wxs:

        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <?Include defines.wxi ?>
          <Product ... >
            <Component Id='c.Images' Directory='$(var.IMAGE_FOLDER)' />
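
    Two hedged observations (mine, not the poster's): the error names MAGE_FOLDER while the define is IMAGE_FOLDER, which suggests a stray character wherever the variable is referenced; and the WiX preprocessor directive is lowercase include, so <?Include ?> may not be processed at all. A minimal pair that should work:

        <!-- defines.wxi -->
        <Include>
          <?define IMAGE_FOLDER = "Images" ?>
        </Include>

        <!-- Product.wxs: note the lowercase directive -->
        <?include defines.wxi ?>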

    Read the article

  • How can I get elements out of an array with Template Toolkit?

    - by Przemek
    I have an array of paths which I want to read out with Template Toolkit. How can I access the elements of this array? The situation is this:

        my @dirs;
        opendir(DIR, './directory/') || die $!;
        @dirs = readdir(DIR);
        close DIR;
        $vars->{'Tree'} = @dirs;

    Then I call the template page like this:

        $template->process('create.tmpl', $vars)
            || die "Template process failed: ", $template->error(), "\n";

    In this template I want to make a tree of the directories in the array. How can I access them? My idea was to start with a foreach in the template like this:

        [% FOREACH dir IN Tree.dirs %]
        $dir
        [% END %]
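
    A hedged correction (mine, not from the thread): $vars->{'Tree'} = @dirs evaluates the array in scalar context and stores its element count; storing a reference keeps the list. Inside the template, the loop variable is then written [% dir %], and Tree.dirs would instead look for a hash key named dirs. A sketch:

        # Perl side: store a reference to the array
        $vars->{'Tree'} = \@dirs;

        # create.tmpl
        [% FOREACH dir IN Tree %]
          [% dir %]
        [% END %]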

    Read the article

  • Indy 10 FTP empty list

    - by Lobuno
    Hello! I have been receiving reports from some of my users that when using idFTP.List() against some servers (MS FTP), the listing is received as empty (no files) when in reality there are (non-hidden) files in the current directory. Might this be a case of a missing parser? The funny thing: when I use the program to get the list from MY server (MS FTP on Windows 2003), everything seems OK, but on some servers I've been hitting this problem. Using the latest Indy 10 on D2010. Any idea?
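
    Two hedged things to try (suggestions of mine, not from the post): FTP data connections through NAT or firewalls often need passive mode, and comparing the raw listing with the parsed one shows whether this is a transfer problem or a parsing problem:

        IdFTP.Passive := True;   // client initiates the data connection
        IdFTP.List;
        // IdFTP.ListResult holds the raw lines; IdFTP.DirectoryListing the parsed entries

    If ListResult contains lines while DirectoryListing stays empty, a missing or unrecognized list parser is the likely culprit, as the question suspects.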

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now. Yes, I am not ashamed to admit it, but I'm finally doing something about it.

    I met her in college, she was new and sexy and amazingly fast -- and I'd never met anything like her before. Her style and her power captivated me and I couldn't wait to learn more about her. I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on.

    Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance. She promised me she'd be gentle and not hurt me the way C did. She promised me she'd clean up after herself better than C did. She promised me she'd be less enigmatic and easier to keep happy than C was. But I was deceived. Oh sure, as far as truth goes, it wasn't a complete lie. To some extent she was more fun, more powerful, safer, and easier to maintain. But it just wasn't good enough -- or at least it's not good enough now.

    I loved C++, and some part of me still does; it's my first love of programming languages, and I recognize its raw power, its blazing speed, and its improvements over its predecessor. But with today's hardware, at speeds we could only dream to conceive of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund.

    If I ever need to write an operating system or a device driver, then I might need that speed. But 99% of the time I don't. I'm a business-type programmer, and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for it. That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical. There are many times we write programs to handle millions of real-time updates, or handle thousands of financial transactions, or track trading algorithms where every second counts. But if I choose to write my code in C++ purely for speed, chances are I'll never notice the speed increase -- and equally true, chances are it will be far more prone to crash and far less easy to maintain. Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations. If I choose to write an O(n^2) algorithm when I could have used an O(n) algorithm -- that can kill me. If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me. And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes, that can also kill me. But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and will more likely than not open up the system to more risk of crashes and resource leaks.

    And that's when my love for C++ began to die. When I noticed that I didn't need that speed anymore. That that speed was really kind of a lie. Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values. Sure, I can use an unsigned char instead of an int. But in the grand scheme of things it doesn't matter as much as you think it does. The key is maintainability, and that's where C++ failed me.
    I like to tell the other developers I work with that there are two levels of correctness in coding: Is it immediately correct? Will it stay correct? That is, you can hack together any piece of code and make it correct to satisfy a task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct.

    Some people laugh at me when I say I now prefer maintainability over speed. But that is exactly the point. If you focus solely on speed you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry; they end up completely scrapping code before its time. When good code is written well for maintainability, though, it can be correct both now and in the future.

    And you know the best part? My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right. Her creators have poured hundreds of thousands of hours into making her the sexy beast she is today. They made her easy to understand and not an enigmatic mess. They made her consistent and not moody and amorphous. And they made her perform as fast as I care to go by optimizing her both at compile time and at run time.

    Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back. She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need. C# doesn't ask me to clean up her messes! She cleans up after herself, and she tries to make my life easier by taking on most of those optimization tasks C++ asked me to take upon myself.

    Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her. That I alone caused the pain she inflicted on me. And to some extent, you have a point. But she was so high-maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong term or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake. But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe.

    So that's why I'm leaving C++ behind. She says she's changing for me, but I have no interest in what C++0x may bring. Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see. But she's out of my house now. I have three kids and a dog and a cat, and all of them require me to clean up after them; why should I have to clean up after my programming language as well?

    Read the article

  • Blank space after file extension -> weird FileInfo behaviour

    - by Axarydax
    Somehow a file has appeared in one of my directories, and it has a space at the end of its extension - its name is "test.txt ". The weird thing is that Directory.GetFiles() returns me the path of this file, but I'm unable to retrieve file information with the FileInfo class. The error manifests here:

        DirectoryInfo di = new DirectoryInfo("c:\\somedir");
        FileInfo fi = di.GetFileSystemInfos("test*")[0] as FileInfo;
        // correctly, fi.FullName is "c:\somedir\test.txt "
        // but fi.Exists == false (!)

    Is the FileInfo class broken? Can I somehow retrieve information about this file? I really don't know how that file appeared on my file system, and I am unable to recreate more of them. All of my attempts to create a new file with this type of extension have failed, but now my program is crashing when encountering it. I can easily handle the exception when finding the file, but boy am I curious about this!
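
    A hedged explanation (mine): the Win32 API normally strips trailing spaces and dots when normalizing paths, so probing "test.txt " through the ordinary .NET file APIs actually probes "test.txt", and Exists comes back false. Prefixing the path with \\?\ turns that normalization off. A sketch using P/Invoke:

        using System.Runtime.InteropServices;

        static class TrailingSpaceProbe
        {
            [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
            static extern uint GetFileAttributes(string lpFileName);

            const uint INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF;

            public static bool ReallyExists(string path)
            {
                // The \\?\ prefix bypasses Win32 path normalization, so the
                // trailing space in the name is preserved.
                return GetFileAttributes(@"\\?\" + path) != INVALID_FILE_ATTRIBUTES;
            }
        }

    The same prefix with MoveFile/DeleteFile is the usual way such files get renamed or removed; it also explains how one can appear in the first place - something created it through the raw API.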

    Read the article

  • A better way of switching between Android source versions

    - by dan
    I would like to be able to switch between various Android releases (1.0, 1.5, 2.0, etc.) and then access them via the file system to copy all files for that version into a tarball. Currently I am just running

        repo init -u <source URL> -b release-1.

    to get each version (changing the tag for each version I need). If this were a single git repository, I could check out the branch/tag I needed, the project directory would "morph" to reflect it, and I could just tar that folder. Since the Android source is split into multiple git repositories controlled by repo, I have not yet found a way to do this other than the method mentioned above. Any suggestions are appreciated.
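
    One hedged option (mine, not the poster's): repo can re-point an existing client at a different manifest branch, so a single tree can be switched per release and then tarred:

        # inside the existing repo client: switch manifest branch, then sync
        repo init -b <release tag>
        repo sync

        # archive the working tree, skipping repo/git metadata
        tar czf ../android-src.tar.gz --exclude=.repo --exclude=.git .

    Whether re-syncing across releases beats keeping one client per release depends on how much the trees diverge; disk space permitting, separate clients per release stays the simplest route.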

    Read the article
