Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Looping over commits for a file with jGit

    - by Andy Jarrett
    I've managed to get to grips with the basics of jGit: connecting to a repository, adding and committing files, and even looping over the commit messages for the files:

        File gitDir = new File("/Users/myname/Sites/helloworld/.git");
        RepositoryBuilder builder = new RepositoryBuilder();
        Repository repository;
        repository = builder.setGitDir(gitDir).readEnvironment()
                            .findGitDir().build();
        Git git = new Git(repository);
        RevWalk walk = new RevWalk(repository);
        RevCommit commit = null;

        // Add all files
        // AddCommand add = git.add();
        // add.addFilepattern(".").call();

        // Commit them
        // CommitCommand commit = git.commit();
        // commit.setMessage("Commiting from java").call();

        Iterable<RevCommit> logs = git.log().call();
        Iterator<RevCommit> i = logs.iterator();
        while (i.hasNext()) {
            commit = walk.parseCommit(i.next());
            System.out.println(commit.getFullMessage());
        }

    What I want to do next is get all the commit messages for a single file, and then be able to revert that single file back to a specific reference/point in time.
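
    One way to scope both operations to a single file is JGit's porcelain API: LogCommand.addPath() limits the history walk to one path, and CheckoutCommand with a start point restores that path. A minimal sketch (the path and commit id are placeholders, not from the original post):

        // History for just one file (path is relative to the work tree)
        Iterable<RevCommit> fileLog = git.log().addPath("src/HelloWorld.java").call();

        // Restore that one file to its state at a given commit
        git.checkout()
           .setStartPoint("a1b2c3d")              // any ref or commit id
           .addPath("src/HelloWorld.java")
           .call();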

    Read the article

  • Actual High Speed USB flash drive

    - by CSkau
    I'm looking to upgrade my EEE 1000H, possibly replacing the HDD with simple (internal) USB-connected storage. The problem I'm having now is that I can't seem to find any actual high-speed USB sticks. They all proclaim high speeds, but usually turn out to be ~30 MB/s, much lower than the 60 MB/s (480 Mbit/s / 8) I understand USB 2.0 is rated at, no? Can anyone enlighten me as to why no USB sticks seem to go past that low bar, or alternatively point me in the direction of some actual high-speed USB sticks? Any help is greatly appreciated :) Cheers!

    Read the article

  • Cannot access NSDictionary

    - by michael blaize
    I created a JSON feed using a PHP script. I am reading the JSON and can see that the data has been read correctly. However, when it comes to accessing the objects I get "unrecognized selector sent to instance"... I cannot seem to find why that is, after too many hours!!!! Any help would be great! My code looks like this:

        NSDictionary *json = [[NSDictionary alloc] init];
        json = [NSJSONSerialization JSONObjectWithData:receivedData
                                               options:kNilOptions
                                                 error:&error];
        NSLog(@"raw json = %@,%@", json, error);
        NSMutableArray *name = [[NSMutableArray alloc] init];
        [name addObjectsFromArray:[json objectForKey:@"name"]];

    The code crashes when reaching the last line above. The output looks like this:

        raw json = (
            { category = vacancies; link = "http://blablabla.com"; name = "name 111111"; tagline = "tagline 111111"; },
            { category = vacancies; link = "http://blobloblo.com"; name = "name 222222222"; tagline = "tagline 222222222"; }
        ),(null)
        2012-06-23 21:46:57.539 Wind expert[4302:15203] -[__NSCFArray objectForKey:]: unrecognized selector sent to instance 0xdcfb970

    HELP !!!
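
    The log output itself points at the usual cause: the top level of this JSON is an array of dictionaries (note the parentheses in the dump and the __NSCFArray in the exception), so objectForKey: is being sent to an NSArray. A sketch of the conventional fix, iterating the array and collecting each entry's "name":

        NSArray *items = [NSJSONSerialization JSONObjectWithData:receivedData
                                                         options:kNilOptions
                                                           error:&error];
        NSMutableArray *names = [[NSMutableArray alloc] init];
        for (NSDictionary *item in items) {
            [names addObject:[item objectForKey:@"name"]];
        }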

    Read the article

  • MS Access ADP front end and SQL Server back end for field data collection?

    - by Brash Equilibrium
    I am an anthropologist. I am going to the field and will use a netbook to collect survey data. The survey forms will need to let me enter data into multiple tables, search those tables, and use subforms, and they must be fast enough not to slow down my interviews. I have considered storing the data in a SQL Server Express 2008 R2 server (there will be a lot of data) while using a Microsoft Access data project as a front end. To cut down the number of steps required to collect and store data, I'm considering using the netbook for both data storage and collection (after reading this article about SQL Server on a netbook). My questions are: (1) Is there a simpler solution that is also gratis (gratis because I already have an MS Access license from my workplace, and SQL Server Express is, obviously, free)? (2) Does my idea to store and collect data on the netbook make sense? Thank you.

    Read the article

  • Need opinions on LaTeX and ever upgrading

    - by yCalleecharan
    Hi, I've been using LaTeX since 2005 with the TeXLive distribution, and I've upgraded as each new TeXLive distribution comes out. In recent years I have noticed an increase in new packages, updated packages, and in one instance a new package bearing a different name replacing an old one by the same package author. A LaTeX document that relies heavily on packages and was produced a few years back may start to give warnings and error messages under present-day LaTeX compilation. The primary reason I switched to LaTeX is its reliability and robustness for creating big documents easily, not to mention the adorable typographic quality. With LaTeX one doesn't have to worry about how to open a docx in an old program supporting only doc, for instance. Now, with so many continual changes to the packages in a LaTeX distribution, I tend to wonder when this madness will end. Not that enhanced and new features in packages are bad, but not all updated packages are backward compatible. Eventually one would like to be able to compile, in 10 years' time, a LaTeX file one is working on at present, and not get any compilation warnings/error messages due to unpredictable behavior of updated packages or a package that has been cast off from a LaTeX distribution. If I understand correctly, CTAN does keep a database with all packages from different versions. I would like to know how you LaTeX users handle this issue. Thanks a lot...

    Read the article

  • SQLAlchemy, one to many vs many to one

    - by sadvaw
    Dear Everyone, I have the following data:

        CREATE TABLE `groups` (
          `bookID` INT NOT NULL,
          `groupID` INT NOT NULL,
          PRIMARY KEY(`bookID`),
          KEY(`groupID`)
        );

    and a books table which basically has books(bookID, name, ...), but WITHOUT groupID. There is no way for me to determine what the groupID is at the time of the insert for books. I want to do this in SQLAlchemy, hence I tried mapping Book to books joined with groups on book.bookID = groups.bookID. I made the following:

        tb_groups = Table('groups', metadata,
            Column('bookID', Integer, ForeignKey('books.bookID'), primary_key=True),
            Column('groupID', Integer),
        )
        tb_books = Table('books', metadata,
            Column('bookID', Integer, primary_key=True),
        )
        tb_joinedBookGroup = sql.join(tb_books, tb_groups,
            tb_books.c.bookID == tb_groups.c.bookID)

    and defined the following mappers:

        mapper(Group, tb_groups, properties={
            'books': relation(Book, backref='group')
        })
        mapper(Book, tb_joinedBookGroup)

    However, when I execute this piece of code, I notice that each Book object has a field groups, which is a list, and each Group object has a books field, which is a singular assignment. I think my definitions here must be confusing SQLAlchemy about the many-to-one vs. one-to-many relationship. Can someone help me sort this out? My desired goal is g.books = [b, b, b, ...] and book.group = g, where g is an instance of Group and b is an instance of Book.
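
    One way to pin the direction down explicitly, shown here as a sketch in the same classic mapper style (untested against this schema), is to declare which side is the collection with uselist:

        from sqlalchemy.orm import mapper, relation, backref

        mapper(Group, tb_groups, properties={
            # Group.books is the list side; Book.group is forced to be scalar.
            'books': relation(Book, uselist=True,
                              backref=backref('group', uselist=False))
        })
        mapper(Book, tb_joinedBookGroup)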

    Read the article

  • Options to Reformat USB Drive

    - by user8783
    I have a 64GB USB thumb drive. When I plug it into a Windows system (I have tried several machines with both Windows 7 and 8), the system attempts to read it and never completes. The drive does not show up in Device Manager, nor does it appear in Computer Management > Disk Management. If I click on the "Safely Remove Hardware" icon, however, I do get an option for "Eject Mass Storage Media", although it never accomplishes anything. Also, as long as the USB drive is connected, I cannot access other USB drives or even restart my computer. It seems as though it locks up Windows Explorer. Is there any option to reformat this drive? It was a great drive up until all this craziness began, and I would hate to throw it out because it was rather pricey.

    Read the article

  • using yield in C# like I would in Ruby

    - by Sarah Vessels
    Besides just using yield for iterators in Ruby, I also use it to pass control briefly back to the caller before resuming control in the called method. What I want to do in C# is similar. In a test class, I want to get a connection instance, create another variable instance that uses that connection, then pass the variable to the calling method so it can be fiddled with. I then want control to return to the called method so that the connection can be disposed. I guess I'm wanting a block/closure like in Ruby. Here's the general idea:

        private static MyThing getThing() {
            using (var connection = new Connection()) {
                yield return new MyThing(connection);
            }
        }

        [TestMethod]
        public void MyTest1() {
            // call getThing(), use yielded MyThing, control returns to getThing()
            // for disposal
        }

        [TestMethod]
        public void MyTest2() {
            // call getThing(), use yielded MyThing, control returns to getThing()
            // for disposal
        }
        ...

    This doesn't work in C#; ReSharper tells me that the body of getThing cannot be an iterator block because MyThing is not an iterator interface type. That's definitely true, but I don't want to iterate through some list. I'm guessing I shouldn't use yield if I'm not working with iterators. Any idea how I can achieve this block/closure thing in C# so I don't have to wrap the code in MyTest1, MyTest2, ... around the code in getThing()'s body?
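
    The closest C# analogue to a Ruby block is usually a delegate parameter: the helper owns the using block and invokes caller-supplied code in the middle. A sketch (WithThing is an invented name; MyThing and Connection come from the question):

        private static void WithThing(Action<MyThing> body) {
            using (var connection = new Connection()) {
                body(new MyThing(connection));
            }   // connection is disposed here, after the caller's code has run
        }

        [TestMethod]
        public void MyTest1() {
            WithThing(thing => {
                // fiddle with thing; disposal happens when this lambda returns
            });
        }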

    Read the article

  • Emulate a USB port as a USB flash drive?

    - by Wilco
    Does anyone know of any software that can emulate a USB flash drive through an available USB port in OS X? Perhaps some way to map a directory to a USB port that could then be connected to another device that supports reading USB storage devices? I'd love to connect my laptop to my car's USB port and access files as if it were a USB drive. I know about target disk mode with FireWire (not sure if this is also supported over USB), but I was hoping for something that doesn't require booting outside of the OS (I want to retain use of the machine). Any ideas?

    Read the article

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 web service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the web service would throw the following exception:

        A network-related or instance-specific error occurred while establishing
        a connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured
        to allow remote connections. (provider: Named Pipes Provider, error: 40 -
        Could not open a connection to SQL Server)

    This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed, and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?
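
    One detail worth noting: the error text names the Named Pipes provider, and access to the pipe is authenticated with the (impersonated) Windows identity before the SQL login is ever used. A commonly suggested check is therefore to force TCP explicitly in the connection string; a sketch with placeholder server and database names:

        Data Source=tcp:NEWSQLSERVER,1433;Initial Catalog=MyDb;User ID=webUser;Password=...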

    Read the article

  • Tree-like queues

    - by Rehno Lindeque
    I'm implementing an interpreter-like project for which I need a strange little scheduling queue. Since I'd like to avoid wheel-reinvention, I was hoping someone could give me references to a similar structure or existing work. I know I can simply instantiate multiple queues as I go along; I'm just looking for some perspective from other people who might have better ideas than me ;) I envision that it might work something like this: the structure is a tree with a single root. You get a kind of "insert_iterator" to the root and then push elements onto it (e.g. a and b in the example below). However, at any point you can also split the iterator into multiple iterators, effectively creating branches. The branches cannot merge into a single queue again, but you can start popping elements from the front of the queue (again, using a kind of "visitor_iterator") until empty branches can be discarded (at your discretion):

        x -> y -> z
        a -> b -> { g -> h -> i -> j }
        f -> b

    Any ideas? It seems like a relatively simple structure to implement myself using a pool of circular buffers, but I'm following the "think first, code later" strategy :) Thanks
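
    For perspective, here is a minimal sketch of such a branching queue (my own illustration, not the poster's design): each node owns a deque, and splitting creates child branches that are consumed independently.

        from collections import deque

        class BranchQueue:
            def __init__(self):
                self.items = deque()      # this branch's pending elements
                self.branches = []        # children created by split()

            def push(self, x):
                self.items.append(x)

            def split(self, n=2):
                # After a split, callers push onto the children; branches
                # never merge back into a single queue.
                self.branches = [BranchQueue() for _ in range(n)]
                return self.branches

            def pop(self):
                # Consume this branch front to back; once empty, the caller
                # descends into self.branches (or discards empty ones).
                return self.items.popleft()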

    Read the article

  • Multiple exports with MEF does some really heinous stuff -- why, and why is it allowed?

    - by Dave
    I have an interesting situation where I need to do something like this:

        [Export(typeof(ICandy1))]
        [Export(typeof(ICandy2))]
        public class Candy : ICandy2 { ... }

    where

        public interface ICandy1 { ... }
        public interface ICandy2 : ICandy1 { ... }

    I couldn't find any posts anywhere about using multiple [Export] attributes, so I figured, what the hell, might as well try it. At first glance, it actually seemed to work. I have a couple of methods that call into both interfaces of a Candy instance, and it was fine. However, as I started to test the app, I saw that the behavior wasn't right, and when looking at the Output window, I saw that I was getting tons of COMExceptions. I couldn't track down where they were all coming from, but they always occurred when a worker thread was sleeping. I figured they had to be coming from the main thread, then, but didn't know how to debug this at all. Nothing should have been going on in the GUI, and I disabled my DispatchTimers just in case; same thing. Even more strange than the COMExceptions was the really, really erratic behavior when stepping through code. About 30% of the time, when I single-stepped, it would pop out of the method, or it would single-step over two lines of code! Totally weird stuff that I am not used to seeing. The only thing that changed between working and non-working code was the introduction of MEF through my plugin loading code. So as a test, I changed my plugin assembly to export only one interface, and I hardcoded everything in the app that relied on the other (now not-implemented) interface. And now the COMExceptions are gone, and the weird debugging behavior is gone. Is this something people here have seen before? If MEF is not expected to allow a class to Export multiple interfaces, then shouldn't a CompositionException get raised when composing the parts? Can anyone explain why MEF would cause these weird problems???
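
    For what it's worth, MEF's ExportAttribute is declared with AllowMultiple = true, so exporting several contracts from one class is supported; with the default (shared) creation policy, both imports in this consumer-side sketch should resolve to the same Candy part (member names invented):

        public class CandyConsumer {
            [Import(typeof(ICandy1))]
            public ICandy1 AsCandy1 { get; set; }

            [Import(typeof(ICandy2))]
            public ICandy2 AsCandy2 { get; set; }
        }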

    Read the article

  • Facebook Tagging friends to the picture

    - by Rajesh Dante
    The code below tags only the first uid; after that it shows "Fatal error: Uncaught OAuthException: (#100) Invalid parameter". Also, can I use an exact location for tagging? In the code below the x and y values are in pixels.

        $facebook = new Facebook(array(
            'appId'  => FBAPPID,
            'secret' => FBSECRETID,
        ));
        $facebook->setFileUploadSupport(true);

        if (isset($_POST['image']) && isset($_POST['tname'])) {
            $path_to_image = encrypt::instance()->decode($_POST['image']);
            $tags = (array) encrypt::instance()->decode($_POST['tname']);
            /*
             * Output:
             * $tags = array (
             *   0 => '[{"tag_uid":"100001083191675","x":100,"y":100},
             *          {"tag_uid":"100001713817872","x":100,"y":230},
             *          {"tag_uid":"100000949945144","x":100,"y":360},
             *          {"tag_uid":"100001427144227","x":230,"y":100},
             *          {"tag_uid":"100000643504257","x":230,"y":230},
             *          {"tag_uid":"100001155130231","x":230,"y":360}]'
             * );
             */
            $args = array(
                'message'      => 'Von ',
                'source'       => '@' . $path_to_image,
                'access_token' => $this->user->fbtoken,
            );
            $photo = $facebook->api($this->user->data->fbid . '/photos', 'post', $args);
            // upload works but not tags
            if (is_array($photo) && !empty($photo['id'])) {
                echo 'Photo uploaded. Check it on Graph API Explorer. ID: ' . $photo['id'];
                foreach ($tags as $key => $t) {
                    $tagRe = json_encode($t);
                    $args = array(
                        'tags'         => $tagRe,
                        'access_token' => $this->user->fbtoken,
                    );
                    $facebook->api('/' . $photo['id'] . '/tags', 'post', $args);
                }
            }
        }
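
    Two things stand out in the dump above: $tags[0] is already a JSON string, so json_encode()-ing it again double-encodes it, and the Graph API documents tag x/y as percentage offsets (0-100) of the photo's width and height, not pixels. A sketch (untested) of a per-tag loop along those lines:

        foreach (json_decode($tags[0], true) as $tag) {
            $args = array(
                'to' => $tag['tag_uid'],
                'x'  => $tag['x'],   // must be a percentage (0-100), not pixels
                'y'  => $tag['y'],
                'access_token' => $this->user->fbtoken,
            );
            $facebook->api('/' . $photo['id'] . '/tags', 'post', $args);
        }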

    Read the article

  • Advice on off-site backup of Hyper-V Failover Cluster

    - by Paul McCowat
    We are currently setting up a Server 2008 R2 box which will be off-site, connected over a leased line with VPN. At the main site are 2 x Hyper-V hosts in a failover cluster with a PowerVault MD3000i iSCSI SAN. We are using BackupAssist for local backups; each host backs up itself and its guests nightly, creating a 500GB backup each, which is copied to a 2TB rotated NAS drive. Files and SQL DBs are also backed up / log-shipped etc. We are looking for the best way to back up the Hyper-V VMs and copy them off-site so that the OS images are at most a month old and the data at most a day old. The main backups are too large to transfer between backup windows, so the options discussed so far are: (1) take rotating individual backups of the VMs each day and copy them over (Day 1 the SQL VM, Day 2 the Exchange VM, etc.), which would require more storage; (2) look into Hyper-V snapshots, though we don't believe these are supported in clustering; (3) third-party replication tools.

    Read the article

  • How to transform a production to LL(1) grammar for a list separated by a semicolon?

    - by Subb
    Hi, I'm reading this introductory book on parsing (which is pretty good btw) and one of the exercises is to "build a parser for your favorite language." Since I don't want to die today, I thought I could write a parser for something relatively simple, i.e. a simplified CSS. Note: the book teaches you how to write an LL(1) parser using the recursive-descent algorithm. So, as a sub-exercise, I am building the grammar from what I know of CSS. But I'm stuck on a production that I can't transform into LL(1):

        //EBNF
        block = "{", declaration, {";", declaration}, [";"], "}"

        //BNF
        <block>       ::= "{" <declaration> "}"
        <declaration> ::= <single-declaration> <opt-end>
                        | <single-declaration> ";" <declaration>
        <opt-end>     ::= "" | ";"

    This describes a CSS block. Valid blocks can have the forms:

        { property : value }
        { property : value; }
        { property : value; property : value }
        { property : value; property : value; }
        ...

    The problem is the optional ";" at the end, because it overlaps with the starting character of {";", declaration}, so when my parser meets a semicolon in this context, it doesn't know what to do. The book talks about this problem, but in its example the semicolon is obligatory, so the rule can be modified like this:

        block = "{", declaration, ";", {declaration, ";"}, "}"

    So, is it possible to achieve what I'm trying to do using an LL(1) parser?
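
    It is; the standard move is to left-factor so the semicolon is consumed first and the decision is deferred until the next token is either the start of a declaration or "}". One such refactoring, sketched in the same EBNF notation:

        block     = "{", declaration, decl-tail, "}"
        decl-tail = [ ";", [ declaration, decl-tail ] ]

    After ";" the parser looks at a single token: something in FIRST(declaration) means another declaration follows, while "}" means the list (with its trailing semicolon) is done, which keeps the grammar LL(1).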

    Read the article

  • df -h command in Ubuntu

    - by Esha Sharma
    I am a new user of Ubuntu. When I type df -h in a terminal, it gives me a list of all storage devices and their space usage. On my system I get this:

        Filesystem      Size  Used Avail Use% Mounted on
        /cow            934M  173M  761M  19% /
        udev            925M  4.0K  925M   1% /dev
        tmpfs           374M  856K  373M   1% /run
        /dev/sdb1       7.5G  2.8G  4.8G  37% /cdrom
        /dev/loop0      1.5G  1.5G     0 100% /rofs
        tmpfs           934M   16K  934M   1% /tmp
        none            5.0M     0  5.0M   0% /run/lock
        none            934M   76K  934M   1% /run/shm
        /dev/sda        299G   74M  299G   1% /media/q

    I understand that /dev/sda is my hard drive, which is 320 GB (299 GiB, and hopefully that is what is being displayed), and /dev/sdb1 is the 8GB pen drive from which I am running the live CD. My question is: what are the other filesystems, and what is their physical location, if the whole disk is taken up by /dev/sda?

    Read the article

  • Replacing text with apostrophe text via sed in applescript

    - by bob stinton
    I have an AppleScript to find and replace a number of strings. I ran into the problem of a replacement string containing & some time ago, but could get around it by putting \& in the replacement property list. However, an apostrophe seems to be far more annoying. Using a single apostrophe just gets ignored (the replacement doesn't contain it), using \' gives a syntax error (Expected “"” but found unknown token.) and using \\' gets ignored again. (You can keep going, by the way: an even number of backslashes gets ignored, an odd number gives a syntax error.) I tried replacing the apostrophes in the actual sed command with double quotes (sed "s…" instead of sed 's…'), which works on the command line but gives a syntax error in the script (Expected end of line, etc. but found identifier.). The single quotes mess with the shell, the double quotes with AppleScript. I also tried '\'' as was suggested here, and '"'"' from here. A basic script to reproduce the errors:

        set findList to "Thats.nice"
        set replaceList to "That's nice"
        set fileName to "Thats.nice.whatever"
        set resultFile to do shell script "echo " & fileName & " | sed 's/" & findList & "/" & replaceList & "/'"
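
    One approach that sidesteps the escaping entirely is AppleScript's "quoted form of", which single-quotes a string and escapes any embedded apostrophes for the shell. A sketch (untested) of the same script in that style:

        set findList to "Thats.nice"
        set replaceList to "That's nice"
        set fileName to "Thats.nice.whatever"
        -- each operand is quoted individually; the shell concatenates the
        -- pieces back into one sed expression
        set resultFile to do shell script "echo " & quoted form of fileName & ¬
            " | sed s/" & quoted form of findList & "/" & quoted form of replaceList & "/"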

    Read the article

  • How to use db4o IObjectContainer in a web application ? (Container lifetime ?)

    - by driis
    I am evaluating db4o for persistence for an ASP.NET MVC project. I am wondering how I should use the IObjectContainer in a web context with regard to object lifetime. As I see it, I can do one of the following: (1) create the IObjectContainer at application startup and keep the same instance for the entire application lifetime; (2) create one IObjectContainer per request; (3) start a server, and get a client IObjectContainer for each database interaction. What are the implications of these options in terms of performance and concurrency? Since the database file is locked when an IObjectContainer is opened, I am pretty sure that option 2 would get me some problems with concurrency; would this also be the case for option 1? As I understand it, if I retrieve an object from an IObjectContainer, it must be saved by the same IObjectContainer instance in order for db4o to identify it as being the same object. Therefore, if I choose option 3, I would have to retrieve the original object, make the necessary changes (copy data from a modified object), and then store it using the same IObjectContainer. Is this true?
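
    A sketch of what option 3 typically looks like with db4o's classic client/server API (names from the db4o 7.x-era .NET client; the User class and query are invented for illustration): one embedded server per application, one short-lived client per unit of work.

        // at application startup: port 0 = embedded server, in-process clients only
        IObjectServer server = Db4oFactory.OpenServer("app.yap", 0);

        // per request / unit of work
        using (IObjectContainer client = server.OpenClient())
        {
            var user = client.Query<User>(u => u.Name == "bob")[0];
            user.Email = "new@example.com";
            client.Store(user);   // stored via the same container it came from
            client.Commit();
        }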

    Read the article

  • OOP C# Question: Making a Fruit a Pear

    - by Adam Kane
    Given that I have an instance of Fruit with some properties set, and I want to get those properties into a new Pear instance (because this particular Fruit happens to have the qualities of a pear), what's the best way to achieve this effect? For example, what we can't do is simply cast a Fruit to a Pear, because not all Fruits are Pears:

        public static class PearGenerator {
            public static Pear CreatePear () {
                // Make a new generic fruit.
                Fruit genericFruit = new Fruit();

                // Downcast it to a pear. (Throws exception: can't cast a Fruit to a Pear.)
                Pear pear = (Pear)genericFruit;

                // Return freshly grown pear.
                return ( pear );
            }
        }

        public class Fruit {
            // some code
        }

        public class Pear : Fruit {
            public void PutInPie () {
                // some code
            }
        }

    Thanks! Update: I don't control the "new Fruit()" code. My starting point is that I've got a Fruit to work with. I need to get that Fruit into a new Pear somehow. Maybe copy all the properties one by one?
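
    A sketch of the property-copying idea from the update: give Pear a constructor that copies from an existing Fruit (Color and Weight are invented stand-ins for Fruit's real members):

        public class Pear : Fruit {
            public Pear (Fruit source) {
                this.Color  = source.Color;   // hypothetical Fruit property
                this.Weight = source.Weight;  // hypothetical Fruit property
            }

            public void PutInPie () {
                // some code
            }
        }

        // usage: Pear pear = new Pear(genericFruit);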

    Read the article

  • trouble calculating offset index into 3D array

    - by Derek
    Hello, I am writing a CUDA kernel to create a 3x3 covariance matrix for each location in a rows*cols main matrix, so the 3D matrix is rows*cols*9 in size, which I allocated in a single malloc accordingly. I need to access it by a single index value. The 9 values of the 3x3 covariance matrix get their values set according to the appropriate row r and column c from some other 2D arrays. In other words, I need to calculate the appropriate index to access the 9 elements of the 3x3 covariance matrix, as well as the row and column offset of the 2D matrices that are inputs to the value, as well as the appropriate index for the storage array. I have tried to simplify it down to the following:

        // I am calling this kernel with 1D blocks that are 512 cols x 1 row. TILE_WIDTH = 512
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int r = by + ty;
        int c = bx*TILE_WIDTH + tx;
        int offset = r*cols + c;
        int ndx = r*cols*rows + c*cols;

        if ((r < rows) && (c < cols)) {
            // this IF statement is trying to avoid the case where a thread block
            // went bigger than my original array.. not sure if correct
            d_cov[ndx + 0] = otherArray[offset];
            d_cov[ndx + 1] = otherArray[offset];
            d_cov[ndx + 2] = otherArray[offset];
            d_cov[ndx + 3] = otherArray[offset];
            d_cov[ndx + 4] = otherArray[offset];
            d_cov[ndx + 5] = otherArray[offset];
            d_cov[ndx + 6] = otherArray[offset];
            d_cov[ndx + 7] = otherArray[offset];
            d_cov[ndx + 8] = otherArray[offset];
        }

    When I check this array against the values calculated on the CPU, which loops over i=rows, j=cols, k=1..9, the results do not match up. In other words, d_cov[i*rows*cols + j*cols + k] != correctAnswer[i][j][k]. Can anyone give me any tips on how to solve this problem? Is it an indexing problem, or some other logic error?
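
    For a rows x cols x 9 array with k varying fastest, the linear index of element (r, c, k) is (r*cols + c)*9 + k; the ndx above (r*cols*rows + c*cols) mixes the dimensions. A sketch of the kernel body using one consistent layout on both CPU and GPU:

        if (r < rows && c < cols) {
            int base = (r * cols + c) * 9;      // start of this location's 3x3 block
            for (int k = 0; k < 9; ++k) {
                d_cov[base + k] = otherArray[offset];
            }
        }
        // CPU check with the same layout: d_cov[(i*cols + j)*9 + k] vs correctAnswer[i][j][k]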

    Read the article

  • Sudden slow read & write speed on all IO

    - by user23392
    I have a custom-built rig that has 2 storage drives. For the OS: Western Digital 1.0TB hard drive, 64MB cache. For other stuff: Corsair Performance 3 128GB (SSD) [expected read speed: 400 MB/s]. The system was incredibly fast for a couple of months; then one day I was playing a game and it started to get buggy (some sounds and objects disappearing). I stopped the game and the system seemed to be unstable, so I had to shut it down. The next morning I couldn't start it up; it was saying something about a corrupt device. I formatted both disks and installed a fresh copy of Windows. All I can say is that since that day the system has never been like before: it takes 10 minutes to boot up (the icons and desktop slowly appear), though once it's done the slowness isn't as noticeable. Here are my benchmarks (read speed / write speed) for the HDD and the SSD: [benchmark screenshots not reproduced here]. Does anyone know what the issue could be?

    Read the article

  • How do I get a USB hard-disk adapter to work with a PCMCIA USB card?

    - by Carl
    I have a PCMCIA (CardBus) 32-bit USB 2.0 2-port card. If I plug my MP3 player in, it works fine. I also have a USB adapter for a hard disk drive. When I plug that into the PCMCIA USB slot, the light on the hard disk flashes between red and green; it should stay green. "USB Mass Storage device" is added in the Device Manager, but no drive letter appears in My Computer. The installing-device icon down by the clock never goes away. (Windows XP SP3.)

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, suppose we have a customer details table that stores data for customers from different countries, and each country has a slightly different address configuration, plus 1-2 extra fields: e.g. French customer details may also store an entry code, floor/level, and title fields (madame, etc.); South Africa would have a security number; and so on. Given that we're talking about minor variances, my idea is to put all of the fields into the table and use what is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr. But this seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields/columns are bad, but I'm struggling to find justification in terms of database design principles for or against this argument and its preferred solutions. Another option is a separate mini EAV table that stores extra data with parent_id, key, val fields. Or to serialise extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data. So my question specifically: if you have slight variances in data for each record (1-2 different fields, for instance), what is the best way to proceed?
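
    For concreteness, a sketch of the mini EAV option described above (table and column names are illustrative):

        CREATE TABLE customer_extra_data (
          parent_id INT          NOT NULL,  -- FK to the main customer_data row
          `key`     VARCHAR(64)  NOT NULL,  -- e.g. 'entry_code', 'security_number'
          val       VARCHAR(255),
          PRIMARY KEY (parent_id, `key`)
        );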

    Read the article

  • Using Mercurial (hg), can you just "hg backout" all the commits you did for the files you don't want

    - by Jian Lin
    Using Mercurial (hg), can you just "hg backout" all the commits you did for the files you don't want to push, and then do a push? Because Mercurial (or Git) won't let us push a single file or a single folder to another repository, I am thinking: (1) look at the commits we made, and hg backout the ones we don't want to push; (2) hg out -v to see the list of files that will be pushed; (3) now do the push with hg push. Is this a good way? I ask because I got the following advice: (1) Don't commit a file if you don't want it to be pushed (but sometimes, even just for experimentation, I do want to keep the intermediate revisions). (Maybe I can hg commit and hg backout right away to prevent it from being pushed.) (2) Some people told me just to hg clone tmp from the repository I want to push to, copy the local file over to this tmp working directory, hg commit to this tmp repository, and then do a push. But I found that the hg clone tmp takes up 400MB of new data and files, and makes the hard drive work very hard, just to push 1 file? So I would rather not use this method.
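
    The three-step flow sketched above, as shell commands (the revision number is a placeholder):

        hg backout -r 123 -m "back out changes I don't want pushed"
        hg out -v     # verify which changesets (and files) would be pushed
        hg push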

    Read the article

  • Routing traffic to a specific NIC in Windows

    - by Stoicpoet
    I added a 10GbE NIC to a SQL Server machine which is connected to back-end storage over iSCSI. I would like to force traffic going to a certain IP address/host to use the 10GbE NIC, while all other traffic should continue to use the 1GbE NIC. The 10GbE NIC is configured on a private network. So far I have added an entry in the hosts file for the host I want to reach over the private network, and when I ping the host it does return the private IP, but I'm still finding traffic going down the 1GbE pipe. How can I force all traffic to this host to use the 10GbE interface? Would the best approach be a static route? 160.205.2.3 is the IP of the 1GbE host; I actually want the traffic to route over an interface assigned 172.31.3.2, which is also defined as interface 22. That said, would this work?

        route add 160.205.2.3 mask 255.255.255.255 172.31.3.2 if 22

    Read the article
