Search Results

Search found 50510 results on 2021 pages for 'static files'.

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody. In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third. This runs quite slowly on big files, and since I don't see any other choice (the iteration has to be done), I'm trying to optimize file loading. I can use some amount of RAM for buffering: instead of reading 4 bytes from both files every time, I could read something like 100 MB once and work with that buffer until there is no element left in it, then refill the buffer. But I guess ifstream is already doing that - will doing it myself give me more performance, and is there any reason to? If fstream does buffer, maybe I can change the size of that buffer? Added: my current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof()) {
            if (i1 < i2) {
                output.write(i1);
                input2.seek_back(sizeof(int));
            } else {
                output.write(i2);
                input1.seek_back(sizeof(int));
            }
        } else {
            if (input1.eof()) output.write(i2);
            else if (input2.eof()) output.write(i1);
        }

    What I don't like here: seek_back - I have to seek back to the previous position, since there is no way to peek without reading 4 bytes too many from the file. Also, if one of the streams hits EOF, the loop still keeps checking that stream instead of copying the rest of the other stream straight to the output - but this is not a big issue, because the chunk sizes are almost always equal. Can you suggest improvements for that? Thanks.
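
    A minimal sketch of the buffered-merge idea, in Python for brevity (the file names and the 4-byte little-endian int format are assumptions): each input is read in large chunks, and a lazy merge keeps its own one-element lookahead per stream, so no seek_back or peek is needed at all.

        import heapq
        import struct

        def read_ints(path, bufsize=64 * 1024 * 1024):
            # Yield ints from a binary file, refilling a large buffer as needed.
            # Assumes the file size is a multiple of 4 bytes.
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(bufsize)
                    if not chunk:
                        return
                    for value in struct.unpack("<%di" % (len(chunk) // 4), chunk):
                        yield value

        # heapq.merge consumes both generators lazily and keeps a one-element
        # lookahead per stream internally, so neither stream is ever re-read.
        with open("merged.bin", "wb") as out:
            for value in heapq.merge(read_ints("input1.bin"), read_ints("input2.bin")):
                out.write(struct.pack("<i", value))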

  • How to make a custom EAR file in Maven

    - by Zombies
    Here is my challenge: I need to make an EAR file for a specific container. To be more specific about how this EAR will be created: it is a standard J2EE EAR file with one WAR in it. The container it is deployed to expects certain XML files, which can easily be found (somewhere) inside the source project. Here are my obstacles: the source folder contains various container-specific XML files, but these files do not map directly to where the container expects them inside the EAR file. For example, the container expects a file at 'EARFILE.ear/config/connections.xml', but in the source this file is located at /some/obscure/unrelated/directory. This is the case for about 5-7 files. I cannot change the original source project layout at all. So, how can I create the compliant EAR file that I need? There is NO plugin at this time for the container that I am using; I have certainly looked.
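
    For what it's worth, an EAR is an ordinary zip archive, so the repackaging step can be thought of as a mapping from odd source locations to the paths the container expects; in Maven that mapping would normally live in maven-ear-plugin or maven-resources-plugin configuration. A rough sketch of the idea (in Python, with entirely hypothetical paths):

        import zipfile

        # Hypothetical mapping: where each file lives in the source tree
        # versus where the container expects it inside the EAR.
        layout = {
            "some/obscure/unrelated/directory/connections.xml": "config/connections.xml",
            "target/mywebapp.war": "mywebapp.war",
            "src/main/application/META-INF/application.xml": "META-INF/application.xml",
        }

        with zipfile.ZipFile("EARFILE.ear", "w", zipfile.ZIP_DEFLATED) as ear:
            for source, target in layout.items():
                ear.write(source, arcname=target)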

  • JavaScript (using jQuery) in a large project... organization, passing data, private methods, etc.

    - by gaoshan88
    I'm working on a large project that is organized like so: multiple JavaScript files are included as needed, and most code is wrapped in anonymous functions...

        // wutang.js
        // included files go here

        // public stuff
        var MethodMan;

        // private stuff
        (function() {
            var someVar1;
            MethodMan = function(){...};
            var APrivateMethod = function(){...};

            $(function(){
                // jQuery page-load stuff here
                $('#meh').click(APrivateMethod);
            });
        })();

    I'm wondering about a few things here. Assuming that a number of these files are included on a page, what has access to what, and what is the best way to pass data between the files? For example, I assume that MethodMan() can be accessed by anything on any included page; it is public, yes? var someVar1 is only accessible to the methods inside that particular anonymous function and nowhere else, yes? The same is true of var APrivateMethod(), yes? What if I want APrivateMethod() to make something available to some other method in a different anonymous wrapper in a different included file? What's the best way to pass data between these private, anonymous functions in different included files? Do I simply have to make whatever I want to share public? And how do I minimize global variables while doing it? What about the following:

        var PootyTang = function(){
            someVar1 = $('#someid').text();
            // some stuff
        };

    and in another included file used by that same page I have:

        var TangyPoot = function(){
            someVar1 = $('#someid').text();
            // some completely different stuff
        };

    What's the best way to share the value of someVar1 across these anonymous (wrapped as in the first example) functions?
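
    A common answer to the sharing question is to expose one shared namespace object instead of promoting each variable to a global. Sketched here in Python only because this page mixes several languages; in JavaScript the equivalent would be a single global object (var MyApp = MyApp || {};) that each anonymous wrapper reads and writes:

        # One explicit shared namespace instead of many loose globals.
        class SharedState:
            some_var = None

        def pooty_tang():
            SharedState.some_var = "text from #someid"   # one wrapper writes...

        def tangy_poot():
            return SharedState.some_var                  # ...another reads

        pooty_tang()
        print(tangy_poot())   # -> text from #someid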

  • Stop writing blank line at the end of CSV file (using MATLAB)

    - by Grant M.
    Hello all... I'm using MATLAB to open a batch of CSV files containing column headers and data (using the 'importdata' function); then I manipulate the data a bit and write the headers and data to new CSV files using the 'dlmwrite' function. I'm using the '-append' and 'newline' attributes of 'dlmwrite' to add each line of text/data on a new line. Each of my new CSV files has a blank line at the end, whereas this blank line was not there when I read in the data... and I'm not using 'newline' on my final call of 'dlmwrite'. Does anyone know how I can keep from writing this blank line to the end of my CSV files? Thanks for your help, Grant

    EDITED 5/18/10 1:35PM CST: added information about code and text file per request... you'll notice after performing the procedure below that there appears to be a carriage return at the end of the last line in the new text file. Consider a text file named 'textfile.txt' that looks like this:

        Column1, Column2, Column3, Column4, Column 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5

    Here's a sample of the code I am using:

        % import data
        importedData = importdata('textfile.txt');

        % manipulate data
        importedData.data(:,1) = 100;

        % store column headers into single comma-delimited
        % character array (for easy writing later)
        columnHeaders = importedData.textdata{1};
        for counter = 2:size(importedData.textdata,2)
            columnHeaders = horzcat(columnHeaders,',',importedData.textdata{counter});
        end

        % write column headers to new file
        dlmwrite('textfile_updated.txt',columnHeaders,'Delimiter','','newline','pc')

        % append all but last line of data to new file
        for dataCounter = 1:(size(importedData.data,2)-1)
            dlmwrite('textfile_updated.txt',importedData.data(dataCounter,:),'Delimiter',',','newline','pc','-append')
        end

        % append last line of data to new file, not
        % creating new line at end
        dlmwrite('textfile_updated.txt',importedData.data(end,:),'Delimiter',',','-append')
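
    The underlying fix, sketched in Python with made-up data: treat the newline as a separator between rows rather than a terminator after each row, so the final row is never followed by one.

        rows = [["Column1", "Column2", "Column3", "Column4", "Column 5"],
                [100, 2, 3, 4, 5],
                [100, 2, 3, 4, 5]]

        # "\n".join() places newlines between rows, never after the last one.
        with open("textfile_updated.txt", "w") as f:
            f.write("\n".join(",".join(str(cell) for cell in row) for row in rows))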

  • What pattern is layered architecture in ASP.NET?

    - by haansi
    Hi, I am an ASP.NET developer and don't know much about patterns and architecture. I would be very thankful if you could guide me here. In my web applications I use four layers:

    1. Web site project (web forms + code-behind .cs files, user controls + code-behind .cs files, master pages + code-behind .cs files)
    2. CustomTypesLayer, a class library (custom types, enumerations, DTOs, constructors, getters/setters, and validation)
    3. BusinessLogicLayer, a class library (all business logic and rules, and all calls to DAL functions)
    4. DataAccessLayer, a class library (just classes communicating with the database)

    My user interface calls only the BusinessLogicLayer. The BusinessLogicLayer does the processing itself and calls DataAccessLayer functions for data. Web forms never call the DAL directly. CustomTypesLayer is shared by all layers. Please guide me: is this approach a pattern? I thought it might be MVC or MVP, but the pages have their code-behind files as well, which is confusing me. If it is not a pattern, is it near to some pattern? Please guide. Thanks
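
    For reference, the structure described above is usually called a layered (N-tier) architecture rather than MVC or MVP. A rough sketch of the call flow it implies, with hypothetical names:

        class DataAccessLayer:
            def fetch_customer(self, customer_id):
                # would talk to the database in the real project
                return {"id": customer_id, "name": "..."}

        class BusinessLogicLayer:
            def __init__(self):
                self.dal = DataAccessLayer()

            def get_customer(self, customer_id):
                customer = self.dal.fetch_customer(customer_id)
                # business rules and validation go here
                return customer

        # The UI (code-behind) talks only to the BLL, never to the DAL:
        print(BusinessLogicLayer().get_customer(42))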

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted from server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

  • Skip reading headers in MATLAB

    - by Paul
    I had a similar question, but what I am trying now is to read files in .txt format into MATLAB. My problem is with the headers. Many times, due to errors, the system rewrites the headers in the middle of the file, and then MATLAB cannot read the file. Is there a way to skip them? I know I can skip reading some characters if I know what the character is. Here is the code I am using:

        [c,pathc]=uigetfile({'*.txt'},'Select the data','V:\data');
        file=[pathc c];
        data= dlmread(file, ',', 1,4);

    This way I let the user pick the file. My files are huge, typically [86400 125], so naturally each has 125 header fields or more, depending on the file. Because the files are so big I cannot copy one here, but the format is like:

        day time col1 col2 col3 col4 ...
        2/3/2010 0:10 3.4 4.5 5.6 4.4 ...
        ...

    and so on. Thanks
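
    One possible approach, sketched in Python under the assumption that every rewritten header starts with the same first field as the original one ('day' here): filter the header lines out before parsing, keeping the delimiter the dlmread call already uses.

        rows = []
        with open("data.txt") as f:
            for line in f:
                # Headers rewritten mid-file are assumed to start with the
                # same first field as the original header ("day" here).
                if line.startswith("day"):
                    continue
                rows.append([field.strip() for field in line.split(",")])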

  • Is it possible to have WAMP run httpd.exe as user [myself] instead of local SYSTEM?

    - by Olivier H
    Hello! I run a Django application over Apache with mod_wsgi, using WAMP. A certain URL allows me to stream the content of image files whose paths are stored in a database. The files can be located either on the local machine or under a network drive (\\my\network\folder). With the development server (manage.py runserver), I have no trouble at all reading and streaming the files. With WAMP, and with network-drive files, I get an IOError: obviously because the httpd instance does not have read permission on said drive. In the task manager, I see that httpd.exe is run by SYSTEM. I would like to tell WAMP to run the server as [myself], as I have read and write permissions on the shared folder. (Eventually, the production server should be run by a 'www-admin' user having the permissions.) Mapping the network shared folder to a drive letter (Z: for instance) does not solve this at all. The User/Group directives in httpd.conf do not seem to have any kind of influence on Apache's behaviour. I've also edited the registry: I tried to duplicate the HKLM\[...]\wampapache registry key under HKEY_CURRENT_USER\ and rename the original key, but then the new key does not seem to be found when I run

        httpd.exe -n wampapache -k start

    from a command prompt, or when I run WAMP. I've run out of ideas :) Has anybody ever had the same issue?

  • Converting WAR to EAR and other Glassfish stories

    - by Random
    Hello! I am really new at this, so I hopefully haven't made any terrible mistake; I apologize beforehand if I have. In my project I was using Tomcat and deploying WAR files, but now some bosses want to deploy EAR files. So there we go. I first downloaded GlassFish (I don't know if it's the appropriate application server for a newbie like me), installed it and all (I even deployed the hello.war via the autodeploy folder). Then I prepared an EAR file. From what I know, I just need to create an Enterprise Application Project in Eclipse and add my WAR file to the modules. This changes the application.xml file automatically (thanks, Eclipse!). So I exported it to an EAR file and uploaded it to the GlassFish server. Wonder of wonders, it doesn't work. I also tried deploying the old WAR file on this new shiny GlassFish, but it gives an HTTP 404 Not Found error. GlassFish seems to say that my project is not in ~/domains/domain1/docroot. By the way, I am using Windows, and I am aware of some problems between GlassFish and Windows due to updating open files or such. So I have two questions: first, am I building the EAR package correctly? Second, do I need to do some special configuration on the GlassFish server to deploy EAR and WAR files? Thanks!

  • How to get the Actual Link file location in VSS?

    - by Regi
    I use VSS, and currently I am adding a link file using the following code:

        int ShareFlags = (int)VSSFlags.VSSFLAG_RECURSNO;

        // link in SourceSafe
        IVSSDatabase ssdb = GetVssDatabase();
        Shared.Enums.SqlObjectSubType _sqlSubType = new Shared.Enums.SqlObjectSubType();
        VSSItem SourceItem = ssdb.get_VSSItem(pSourceItemPath, false);

        // if source is a project, recursively share the whole thing
        if (SourceItem.Type == (int)VSSItemType.VSSITEM_PROJECT)
            ShareFlags = (int)VSSFlags.VSSFLAG_RECURSYES;

        VSSItem DestItem = ssdb.get_VSSItem(pDestItemPath, false);

        // share the item
        DestItem.Share(SourceItem, pComment, ShareFlags);

        if (SourceItem.Type == (int)VSSItemType.VSSITEM_FILE)
        {
            bResult = true;
        }
        return bResult;

    This works fine. My issue is that I need to find the actual link location. For example, I have a project named Link which contains two files, say file1 and file2. I added a link to my working project (say CurrentProject), which has two files of its own, say f1 and f2. After sharing the Link project, the items in CurrentProject are:

        $/CurrentProject/File1
        $/CurrentProject/File2
        $/CurrentProject/F1
        $/CurrentProject/F2

    Here File1 and File2 are link files. I need to get their parent (actual) location, i.e. $/Link/file1 and $/Link/File2. Is there any way to find link files' locations using SourceSafeTypeLib?

  • How to delete-protect a file or folder on Windows Server 2003 and onwards using C#/VB.NET?

    - by Steve Johnson
    Hi all, is it possible to delete-protect a file/folder using the registry or a custom-written Windows service in C#? Using folder permissions it is possible, but I am looking for a solution that restricts even the admin from deleting specific folders. The requirement is that the administrator must not be able to easily track the nature of the protection and/or avert it easily. Obviously any administrator will be able to revert the procedure once the technique is clearly understood. Folder permissions and ownership settings, for example, can easily be reset by an administrator, so that is not an option. Folder-protection software can easily be uninstalled, and it shows a clear indication that a particular folder is protected by some special kind of software, so that too is not an option. Most antivirus programs protect the folders and files in their program directory, and Windows itself doesn't allow certain files, such as the registry files in C:\windows\system32\config, to even be copied. Such protection is desired for folders: reading and writing files should be allowed, but not deletion. The protection has to be seamless and invisible. I do not want to use any protection features like FolderLock, Invisible Secrets, PC Security and Desktop Password, etc. Moreover, the solution has to be something other than folder encryption. The solution has to be OS-native so that it may be implemented programmatically using C#/VB.NET. Please help. Thanks

  • Unable to parse variable length string separated by delimiter

    - by Technext
    Hi, I have a problem with parsing a string which consists only of a directory path. For example, my input string is Abc\Program Files\sample\ and my output should be Abc//Program Files//sample. The script should work for an input path of any length, i.e. it can contain any number of subdirectories (for example, abc\temp\sample\folder\joe). I have looked for help in many places but to no avail. It looks like the FOR command extracts only one whole line, or one string (when we use the 'tokens' keyword in the FOR syntax), but my problem is that I do not know the input path's length in advance and hence the number of tokens. My idea was to use \ as a delimiter and then extract each word before and after it, and write the words to an output file along with // until we reach the end of the string. I tried implementing the following, but it did not work:

        @echo off
        FOR /F "delims=\" %%x in (orig.txt) do (
            IF NOT %%x == "" echo.%%x//>>output.txt
        )

    The file orig.txt contains only one line, i.e. Abc\Program Files\sample\. The output that I get contains only:

        Abc//

    followed by blank spaces. My desired output is:

        Abc//Program Files//sample//

    Can anyone please help me with this? Regards, Technext
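
    The transformation itself is a single substitution rather than a token-by-token parse; shown in Python for illustration (batch can do the same with %variable:\=//% string substitution inside a delayed-expansion loop):

        line = "Abc\\Program Files\\sample\\"
        print(line.replace("\\", "//"))   # -> Abc//Program Files//sample//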

  • Tips for using Subversion and Xcode in a team project

    - by FelipeUY
    Hi to all. I've been working on an Xcode (iPhone) project with three other people. We have the project in a Subversion repository, but we still don't completely understand some aspects of the Subversion + Xcode methodology: 1) Each time someone commits a single file, it may or may not appear in the other developers' projects, even when the same person who created the new files added them to the repository and committed them. Why does that happen? Any suggestions? 2) No one involved in the project can do a "Commit entire project" without causing a considerable headache to the rest of the developers... any idea how this should be done? The working methodology we are trying to implement is that only one developer (generally the leader of the project) can commit the entire project, but he must inform the rest of the team, so everybody can be prepared to receive a message asking them to discard their changes and read the new files from the repository. I need suggestions or advice on how to handle a project with multiple developers using Subversion. We have read the Subversion handbook and many other posts on StackOverflow, but I still can't find any useful advice. Thanks for any tip!

  • How to import 3rd party libraries

    - by Thahzan Mohomed
    I found some cool Android libraries the other day and decided to try some, but I'm having trouble correctly importing a library. This is the URL of the library: https://github.com/dmytrodanylyk/android-process-button. I first tried importing the library into Eclipse (moving the files in the java directory to the src directory and setting the project as a library) and importing the sample into Eclipse, setting it to use the library project (Properties > Android > Libraries). But it didn't work: the layout files said it failed to instantiate [custom widget class]. Then I tried importing the .jar file into the libs directory (and updating the Java build path), but it didn't work either; it showed errors in the java files too. I then tried copying all the java and layout files into the sample project directory, and that worked. But I'm guessing that's not the way to work with 3rd-party libraries. I first thought it was some error with the library, but all the other libraries I tried to import into my projects hit the same problem. Can someone walk me through how to correctly import a 3rd-party library into my Android project?

  • Experiences wanted: producing a jar artifact in IntelliJ

    - by skiaddict1
    Developing with IntelliJ 9.0.2 Community Edition on the Mac. This is a follow-up to this post about including jar files in an artifact, which has not received any replies. I'm hoping that the reason is that somehow, in creating my artifact (or setting my project settings), I unwittingly did something which people don't tend to do, and which is causing my problem, and that by asking people here to share how they create jar artifacts and set up projects, I will discover what it is. To recap: I have a Java project which depends on two library files. I need to package up the entire thing, with the jars inlined (such that on doing jar -tfv <filename> I see ALL the classes listed, including the ones in the two libraries), into a single jar file. I can make an artifact, and I can add the library files to the Output Layout pane, but no matter what I do, I CANNOT get the "Inline Artifact" item in the context menu to be selectable (i.e. non-grey) when I right-click on either library file. The thing is, making a jar which contains library files as well as the project code is NOT an unusual situation in the Java world! So I figure there are lots of IntelliJ folks out there who have done what I need to do, and I would really like to hear from you. What project settings do you use? (Be specific, please :-) And exactly how do you set up your jar artifacts? (Again, as many specific details as possible, please :-) Clearly, I'd be particularly interested to hear from folks with similar setups to mine (above) who are successfully doing what I need to do. Grateful thanks in advance, folks.

  • hash fragments and collisions cont.

    - by Mark
    For this application of mine, I feel like I can get away with a 40-bit hash key, which seems awfully low, but see if you can confirm my reasoning (I want a small key because the key will be converted to a filename, and I want a small filename). (Note: only accidental collisions are a concern; no security issues.) A key point here is that the population in question is divided into groups, and a collision is only relevant if it occurs within the same group. A "group" is a directory on a user's system (the contents of files are hashed, and a collision only matters if it occurs between files within the same directory). So, speculating roughly 100,000 potential users, say 2^17, that corresponds to 2^18 "groups", assuming 2 directories per user on average. With a 40-bit key I can then expect 2^(20+9) = 2^29 files created (among all users) before a collision occurs for some user somewhere; in other words 2^((40+18)/2), due to the "birthday effect". That's an average of 4096 unique files created per user, for 2^17 users, before a single collision occurs for some user somewhere. And then it takes that long again before another collision occurs somewhere (right?)
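
    The arithmetic checks out; a quick sanity check of the birthday bound in Python:

        key_bits = 40
        groups = 2 ** 18    # directories across all users
        users = 2 ** 17

        # Birthday bound: with G independent groups, expect the first
        # within-group collision after roughly sqrt(2**bits * G) files.
        expected_files = (2 ** key_bits * groups) ** 0.5
        print(expected_files == 2 ** 29)   # True, i.e. 2**((40+18)/2)
        print(expected_files / users)      # 4096.0 files per user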

  • Simple Oracle File repository with folder hierarchy

    - by Ope
    I have an application that stores a large number of files (XML and binary) in folder hierarchies. Currently the main method is storing them in the file system, or using a legacy CMS which we want to get rid of. The CMS supports Oracle, and a customer wants to keep the files in Oracle because of enterprise policies (backup etc.). The question is: is there a simple implementation of a file repository with a folder hierarchy for Oracle? What I am looking for is a small .NET component or example code (PL/SQL and/or .NET) with the following methods:

    - Create, Delete, Exists folder
    - CRUD file
    - Move (and potentially Copy) file or directory
    - Access to files and folders by paths like "/root/folder1/folder2/file.xml"
    - Ability to get all the files and folders in a folder, and potentially the entire directory tree
    - Tree traversal; getting the parent, all children, etc. needs to be fast

    I need the implementation in .NET, but if it were just the stored procedures, I could write the .NET calling code. I have pointers to generic articles about creating hierarchies in a database, so if I need to do it from scratch, I know where to start. What I am asking here: is there already an implementation that I could take without doing this from scratch? It seems like such a generic requirement... If the answer is a CMS, document management system or such, it should be open source or at least quite cheap (some hundreds per server), and it should be possible to deploy it XCopy-style, hopefully with only a couple of DLLs. I do not need - or want - a full-featured big CMS with dozens of DLLs, and especially not an MSI installation. I have tried to google this, but the words "repository", "CMS", "file hierarchy" etc. give so many answers that the searches are pretty much useless. Thanks, OPe
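
    For scoping purposes, a minimal in-memory sketch (Python, hypothetical names) of the interface being asked for, using materialized paths; in Oracle the same shape would typically be a table keyed by path, or an adjacency list walked with hierarchical queries:

        class FileRepository:
            def __init__(self):
                self.folders, self.files = {"/"}, {}

            def create_folder(self, path):
                self.folders.add(path.rstrip("/") + "/")

            def put_file(self, path, data):
                self.files[path] = data

            def exists(self, path):
                return path in self.files or path.rstrip("/") + "/" in self.folders

            def descendants(self, path):
                # every file and folder stored under the given prefix
                prefix = path.rstrip("/") + "/"
                return [p for p in list(self.files) + list(self.folders)
                        if p.startswith(prefix) and p != prefix]

        repo = FileRepository()
        repo.create_folder("/root/folder1")
        repo.put_file("/root/folder1/file.xml", b"<doc/>")
        print(repo.exists("/root/folder1/file.xml"))   # True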

  • JS file called twice in WordPress

    - by Dxr Tw
    I'm trying to reduce the number of requests by reducing the number of JS files on a WP site. I successfully combined around 7 JavaScript files into one (site.js). Now, I'm using a plugin which has its own JS file (pluginA.js), and I want to include pluginA.js in that site.js as well. However, if I simply copy pluginA's JS content into site.js and then change the location to /files/site.js, Firebug's NET tab shows that site.js is requested twice. I presume this is due to wp_enqueue_script. How can I make it not request site.js a second time, but just look into the already-loaded site.js? Maybe there's an alternative to wp_enqueue_script? The plugin's PHP file:

        add_action( 'wp_enqueue_scripts', 'testplugin_scripts');

        function testplugin_scripts() {
            /* global $testplugin_version; */
            $default_selector = 'li:has(ul) > a';
            $default_selector_leaf = 'li li li:not(:has(ul)) > a';

            wp_enqueue_script('test-plugin', site_url('/files/site.js', __FILE__), array('jquery'), $testplugin_version);

            $params = array(
                'selector' => apply_filters('testplugin_selector', $default_selector),
                'selector_leaf' => apply_filters('testplugin_selector_leaf', $default_selector_leaf)
            );
            wp_localize_script('test-plugin', 'testplugin_params', $params);
        }

  • Remove file from history completely

    - by Iain
    A colleague has done a few things I told them not to do:

    1. forked the origin repo online
    2. cloned the fork, and added a file to that local repo that shouldn't have been added
    3. pushed this to their fork

    I've then merged the changes from the fork and found the file. I want to remove this file from: my local repo, the fork, and their local repo. I have a solution for removing something from the history, taken from "Remove file from git repository (history)". What I need to know is: should my colleague also go through this, and will a subsequent push remove all trace of it from the fork? (I'd like an alternative to just destroying the fork, as I'm not sure my colleague will do this.)

    SOLUTION: This is the shortest way to get rid of the files:

    1. Check .git/packed-refs - my problem was that I had a refs/remotes/origin/master line there for a remote repository; delete it, otherwise git won't remove those files.
    2. (optional) git verify-pack -v .git/objects/pack/#{pack-name}.idx | sort -k 3 -n | tail -5 - to check for the largest files.
    3. (optional) git rev-list --objects --all | grep a0d770a97ff0fac0be1d777b32cc67fe69eb9a98 - to check what files those are.
    4. git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_names' - to remove the file from all revisions.
    5. rm -rf .git/refs/original/ - to remove git's backup.
    6. git reflog expire --all --expire='0 days' - to expire all the loose objects.
    7. (optional) git fsck --full --unreachable - to check if there are any loose objects.
    8. git repack -A -d - to repack the pack.
    9. git prune - to finally remove those objects.

  • How can I add file locations to a database after they are uploaded using a Perl CGI script?

    - by Paul K
    I have a CGI program I have written in Perl. One of its functions is to upload pics to the server. All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db? I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about uploading binary files to databases. Since I am new to all of this, I am at a loss, and three weeks of research and web searches have turned up nothing. Any suggestions or answers would be greatly appreciated; I would really hate to have to manually add all the locations/names to the db. I am using a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server's file system. I am NOT looking to add the actual files to the db, just their location(s).
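
    The usual pattern is exactly what the question suggests: right after the upload handler writes the file to disk, issue one more INSERT with the path and name it already has. A self-contained sketch (Python and sqlite3 here purely to keep it runnable; in the Perl script it would be the same statement through DBI, and the table is hypothetical):

        import sqlite3

        db = sqlite3.connect("pics.db")
        db.execute("CREATE TABLE IF NOT EXISTS pics (path TEXT, filename TEXT)")

        def record_upload(directory, filename):
            # called right after the upload is written to disk
            db.execute("INSERT INTO pics (path, filename) VALUES (?, ?)",
                       (directory, filename))
            db.commit()

        record_upload("/var/www/uploads/2010/05", "photo.jpg")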

  • Interface vs. abstract in PHP: a real-world scenario

    - by jason
    The goal is to learn whether to use an abstract class, an interface, or both... I'm designing a program which allows a user to de-duplicate all images, but rather than just building classes, I'd like to build a set of libraries that will allow me to re-use the code for other possible future purposes. In doing so I would like to learn more about interfaces vs. abstraction, and was hoping someone could give me input on using either. Here is what the current program will do:

    1. recursively scan a directory for all files
    2. determine whether each file's type is an image type
    3. compare each file's MD5 checksum against all other files found, and keep only the ones which are not duplicates
    4. store the total duplicates found at the end, and display the size taken up
    5. copy the files that are not duplicates into folders by date, for example a Year/Month folder, with the filename being the file's creation date

    While I could just create a bunch of classes, I'd like to start learning more about interfaces and abstraction in PHP. So if I take the scan-directory class as the first example, I have several methods:

    - ScanForFiles($path)
    - FindMD5Checksum()
    - FindAllImageTypes()
    - getFileList()

    ScanForFiles could be public, to allow anyone to access it; it stores in an object the entire directory list of files found, with many details about them (extension, size, filename, path, etc.). FindMD5Checksum runs against the fileList object created by ScanForFiles and adds the MD5 if needed. FindAllImageTypes runs against the fileList object as well and records whether the files are image types. These two methods are optional to run, because the user might not intend to run them every time, or at all. getFileList returns the fileList object itself. While I have a working copy, I've revamped it a few times trying to figure out whether I need to go with an interface, an abstract class, or both. I'd like to know how an expert OO developer would design it, and why.
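
    A rule of thumb that may help: an interface is only a contract, while an abstract class is a contract plus shared implementation, so scanners that share the scanning loop but differ in what they accept fit an abstract base. A sketch of the distinction (Python's abc module standing in for PHP's interface/abstract keywords):

        from abc import ABC, abstractmethod

        # "Interface": a contract only, no behavior.
        class FileFilter(ABC):
            @abstractmethod
            def matches(self, path): ...

        # "Abstract class": a contract plus shared implementation.
        class Scanner(ABC):
            def scan(self, paths):
                return [p for p in paths if self.accept(p)]   # shared loop

            @abstractmethod
            def accept(self, path): ...                       # subclass decides

        class ImageScanner(Scanner):
            def accept(self, path):
                return path.lower().endswith((".jpg", ".png", ".gif"))

        print(ImageScanner().scan(["a.jpg", "b.txt"]))   # ['a.jpg']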

  • Python: Parsing a colon-delimited file with varying numbers of fields

    - by Mark
    I'm trying to parse a few files with the following format in 'clientname'.txt:

        hostname:comp1
        time: Fri Jan 28 20:00:02 GMT 2011
        ip:xxx.xxx.xx.xx
        fs:good:45
        memory:bad:78
        swap:good:34
        Mail:good

    Each section is delimited by a ':', but where lines 0, 2 and 6 have 2 fields, lines 1 and 3-5 have 3 or more fields. (A big issue I've had trouble with is the time: line, since 20:00:02 is really a time and not 3 separate fields.) I have several files like this that I need to parse, and there are many more lines in some of these files, with multiple fields.

        for i in clients:
            if os.path.isfile(rpt_path + i + rpt_ext):
                # if the rpt exists then do this
                rpt = rpt_path + i + rpt_ext
                l_count = 0
                for line in open(rpt, "r"):
                    s_line = line.rstrip()
                    part = s_line.split(':')
                    print part
                    l_count = l_count + 1
            else:
                # else break
                break

    First I check whether the file exists; if it does, I open it and parse it (eventually). For now I'm just printing the output (print part) to make sure it's parsing right. Honestly, the only trouble I'm having at this point is the time: line. How can I treat that line differently from all the others? The time field is ALWAYS the 2nd line in all of my report files.
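
    One way to treat the time: line differently, sketched below: split on the first colon only (or cap the number of splits), so the colons inside the timestamp survive.

        line = "time: Fri Jan 28 20:00:02 GMT 2011"

        # partition() splits on the FIRST colon only.
        key, _, value = line.partition(":")
        print(key, "->", value.strip())    # time -> Fri Jan 28 20:00:02 GMT 2011

        # For name:status:number lines, cap the number of splits instead:
        print("fs:good:45".split(":", 2))  # ['fs', 'good', '45']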

  • Deserializing child elements as attributes of parent

    - by LloydPickering
    I have XML files which I need to deserialize. I used the XSD tool from Visual Studio to create C# object files. The generated classes do deserialize the files, just not in the way I need, and I would appreciate any help figuring out how to solve this. The child elements named 'data' should become properties of the parent element 'task'. A shortened example of the XML is below:

        <task type="Nothing" id="2" taskOnFail="false" >
            <data value="" name="prerequisiteTasks" />
            <data value="" name="exclusionTasks" />
            <data value="" name="allowRepeats" />
            <task type="Wait for Tasks" id="10" taskOnFail="false" >
                <data value="" name="prerequisiteTasks" />
                <data value="" name="exclusionTasks" />
                <data value="" name="allowRepeats" />
            </task>
            <task type="Wait for Tasks" id="10" taskOnFail="false" >
                <data value="" name="prerequisiteTasks" />
                <data value="" name="exclusionTasks" />
                <data value="" name="allowRepeats" />
            </task>
        </task>

    The class definition I am trying to deserialize to is of the form:

        public class task
        {
            public string prerequisiteTasks { get; set; }
            public string exclusionTasks { get; set; }
            public string allowRepeats { get; set; }

            [System.Xml.Serialization.XmlElementAttribute("task")]
            public List<task> ChildTasks { get; set; }
        }

    The child 'task's are fine, but the generated deserialization puts the 'data' elements into a data[] array rather than into the named members of the task class as I need.
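
    XmlSerializer will not remap child elements onto parent properties by itself (the usual .NET routes are implementing IXmlSerializable or reshaping after deserialization), but the mapping itself is mechanical. A sketch of it in Python with ElementTree, assuming a hypothetical file tasks.xml holding the XML above:

        import xml.etree.ElementTree as ET

        def parse_task(element):
            # Promote each <data name="..." value="..."/> child to a field,
            # then recurse into nested <task> elements.
            task = {data.get("name"): data.get("value")
                    for data in element.findall("data")}
            task["id"] = element.get("id")
            task["children"] = [parse_task(child) for child in element.findall("task")]
            return task

        root = ET.parse("tasks.xml").getroot()
        task = parse_task(root)
        print(task["id"], len(task["children"]))   # 2 2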

  • Cisco 800 series won't forward port

    - by sam
    Hello ServerFault, I am trying to forward port 444 from my Cisco router to my web server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config:

        !
        version 12.3
        service config
        no service pad
        service tcp-keepalives-in
        service tcp-keepalives-out
        service timestamps debug uptime
        service timestamps log uptime
        service password-encryption
        no service dhcp
        !
        hostname QUESTMOUNT
        !
        logging buffered 16386 informational
        logging rate-limit 100 except warnings
        no logging console
        no logging monitor
        enable secret 5 -removed-
        !
        username administrator secret 5 -removed-
        username manager secret 5 -removed-
        clock timezone NZST 12
        clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00
        aaa new-model
        !
        !
        aaa authentication login default local
        aaa authentication login userlist local
        aaa authentication ppp default local
        aaa authorization network grouplist local
        aaa session-id common
        ip subnet-zero
        no ip source-route
        no ip domain lookup
        ip domain name quest.local
        !
        !
        no ip bootp server
        ip inspect name firewall tcp
        ip inspect name firewall udp
        ip inspect name firewall cuseeme
        ip inspect name firewall h323
        ip inspect name firewall rcmd
        ip inspect name firewall realaudio
        ip inspect name firewall streamworks
        ip inspect name firewall vdolive
        ip inspect name firewall sqlnet
        ip inspect name firewall tftp
        ip inspect name firewall ftp
        ip inspect name firewall icmp
        ip inspect name firewall sip
        ip inspect name firewall fragment maximum 256 timeout 1
        ip inspect name firewall netshow
        ip inspect name firewall rtsp
        ip inspect name firewall skinny
        ip inspect name firewall http
        ip audit notify log
        ip audit po max-events 100
        ip audit name intrusion info list 3 action alarm
        ip audit name intrusion attack list 3 action alarm drop reset
        no ftp-server write-enable
        !
        !
        !
        !
        crypto isakmp policy 1
         authentication pre-share
        !
        crypto isakmp policy 2
         encr 3des
         authentication pre-share
         group 2
        !
        crypto isakmp client configuration group staff
         key 0 qS;,sc:q<skro1^,
         domain quest.local
         pool vpnclients
         acl 106
        !
        !
        crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac
        crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac
        crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac
        crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac
        !
        crypto dynamic-map vpnusers 1
         description Client to Site VPN Users
         set transform-set tr-des-md5
        !
        !
        crypto map cm-cryptomap client authentication list userlist
        crypto map cm-cryptomap isakmp authorization list grouplist
        crypto map cm-cryptomap client configuration address respond
        crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers
        !
        !
        !
        !
        interface Ethernet0
         ip address 192.168.0.254 255.255.255.0
         ip access-group 102 in
         ip nat inside
         hold-queue 100 out
        !
        interface ATM0
         no ip address
         no atm ilmi-keepalive
         dsl operating-mode auto
        !
        interface ATM0.1 point-to-point
         pvc 0/100
          encapsulation aal5mux ppp dialer
          dialer pool-member 1
        !
        !
        interface Dialer0
         bandwidth 640
         ip address negotiated
         ip access-group 101 in
         no ip redirects
         no ip unreachables
         ip nat outside
         ip inspect firewall out
         ip audit intrusion in
         encapsulation ppp
         no ip route-cache
         no ip mroute-cache
         dialer pool 1
         dialer-group 1
         no cdp enable
         ppp pap sent-username -removed- password 7 -removed-
         ppp ipcp dns request
         crypto map cm-cryptomap
        !
        ip local pool vpnclients 192.168.99.1 192.168.99.254
        ip nat inside source list 105 interface Dialer0 overload
        ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444
        ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000
        ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433
        ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433
        ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25
        ip classless
        ip route 0.0.0.0 0.0.0.0 Dialer0
        ip http server
        no ip http secure-server
        !
        ip access-list logging interval 10
        logging 192.168.0.2
        access-list 1 remark The local LAN.
        access-list 1 permit 192.168.0.0 0.0.0.255
        access-list 2 permit 192.168.0.0
        access-list 2 remark Where management can be done from.
        access-list 2 permit 192.168.0.0 0.0.0.255
        access-list 3 remark Traffic not to check for intrustion detection.
        access-list 3 deny 192.168.99.0 0.0.0.255
        access-list 3 permit any
        access-list 101 remark Traffic allowed to enter the router from the Internet
        access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255
        access-list 101 deny ip 0.0.0.0 0.255.255.255 any
        access-list 101 deny ip 10.0.0.0 0.255.255.255 any
        access-list 101 deny ip 127.0.0.0 0.255.255.255 any
        access-list 101 deny ip 169.254.0.0 0.0.255.255 any
        access-list 101 deny ip 172.16.0.0 0.15.255.255 any
        access-list 101 deny ip 192.0.2.0 0.0.0.255 any
        access-list 101 deny ip 192.168.0.0 0.0.255.255 any
        access-list 101 deny ip 198.18.0.0 0.1.255.255 any
        access-list 101 deny ip 224.0.0.0 0.15.255.255 any
        access-list 101 deny ip any host 255.255.255.255
        access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433
        access-list 101 permit tcp host 120.136.2.22 any eq 1433
        access-list 101 permit tcp host 123.100.90.58 any eq 1433
        access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433
        access-list 101 permit udp host 120.136.2.22 any eq 1433
        access-list 101 permit udp host 123.100.90.58 any eq 1433
        access-list 101 permit tcp any any eq 444
        access-list 101 permit tcp any any eq 9000
        access-list 101 permit tcp any any eq smtp
        access-list 101 permit udp any any eq non500-isakmp
        access-list 101 permit udp any any eq isakmp
        access-list 101 permit esp any any
        access-list 101 permit tcp any any eq 1723
        access-list 101 permit gre any any
        access-list 101 permit tcp any any eq 22
        access-list 101 permit tcp any any eq telnet
        access-list 102 remark Traffic allowed to enter the router from the Ethernet
        access-list 102 permit ip any host 192.168.0.254
        access-list 102 deny ip any host 192.168.0.255
        access-list 102 deny udp any any eq tftp log
        access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
        access-list 102 deny ip any 0.0.0.0 0.255.255.255 log
        access-list 102 deny ip any 10.0.0.0 0.255.255.255 log
        access-list 102 deny ip any 127.0.0.0 0.255.255.255 log
        access-list 102 deny ip any 169.254.0.0 0.0.255.255 log
        access-list 102 deny ip any 172.16.0.0 0.15.255.255 log
        access-list 102 deny ip any 192.0.2.0 0.0.0.255 log
        access-list 102 deny ip any 192.168.0.0 0.0.255.255 log
        access-list 102 deny ip any 198.18.0.0 0.1.255.255 log
        access-list 102 deny udp any any eq 135 log
        access-list 102 deny tcp any any eq 135 log
        access-list 102 deny udp any any eq netbios-ns log
        access-list 102 deny udp any any eq netbios-dgm log
        access-list 102 deny tcp any any eq 445 log
        access-list 102 permit ip 192.168.0.0 0.0.0.255 any
        access-list 102 permit ip any host 255.255.255.255
        access-list 102 deny ip any any log
        access-list 105 remark Traffic to NAT
        access-list 105 deny ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
        access-list 105 permit ip 192.168.0.0 0.0.0.255 any
        access-list 106 remark User to Site VPN Clients
        access-list 106 permit ip 192.168.0.0 0.0.0.255 any
        dialer-list 1 protocol ip permit
        !
        line con 0
         no modem enable
        line aux 0
        line vty 0 4
         access-class 2 in
         transport input telnet ssh
         transport output none
        !
        scheduler max-task-time 5000
        !
        end

    Any ideas? :)

  • Windows Server 2008 R2 network adapter stops working, requires hard reboot

    - by Geoff Dalgas
    TL;DR version: Turns out this was a Windows Server 2008 R2 kernel networking bug. After siccing Microsoft support on it, we (eventually) got an unpublished kernel hotfix from Microsoft to address it. If you, too, are experiencing mysterious low-level network driver failures requiring a reboot/bluescreen cycle, you might want that hotfix (or maybe Service Pack 1 whenever it is released, too).

    We have been using HAProxy along with Heartbeat from the Linux-HA project, using two Linux instances to provide failover. Each server has its own public IP, plus a single IP which is shared between the two using a virtual interface (eth1:1) at IP 69.59.196.211. The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic.

    We are experiencing an occasional network outage on one of our Windows servers behind our Linux gateways. HAProxy will detect that the server is offline, which we can verify by remoting into the failed server and attempting to ping the gateway:

        Pinging 69.59.196.211 with 32 bytes of data:
        Reply from 69.59.196.220: Destination host unreachable.

    Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211):

        Interface: 69.59.196.220 --- 0xa
          Internet Address      Physical Address      Type
          69.59.196.161         00-26-88-63-c7-80     dynamic
          69.59.196.210         00-15-5d-0a-3e-0e     dynamic
          69.59.196.212         00-21-5e-4d-45-c9     dynamic
          69.59.196.213         00-15-5d-00-b2-0d     dynamic
          69.59.196.215         00-21-5e-4d-61-1a     dynamic
          69.59.196.217         00-21-5e-4d-2c-e8     dynamic
          69.59.196.219         00-21-5e-4d-38-e5     dynamic
          69.59.196.221         00-15-5d-00-b2-0d     dynamic
          69.59.196.222         00-15-5d-0a-3e-09     dynamic
          69.59.196.223         ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.252           01-00-5e-00-00-fc     static
          225.0.0.1             01-00-5e-00-00-01     static

    On our Linux gateway instances, arp -a shows:

        peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1
        stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
        peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
        peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
        peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1
        peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
        peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1

    Why would arp occasionally set the entry for this failed server to <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?

    THINGS WE HAVE TRIED

    Static arp entry: I added a static arp entry for testing on one of the Linux gateways, which still didn't help:

        root@haproxy2:~# arp -a
        peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
        peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1
        stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
        peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
        peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
        peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1
        peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1
        root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d
        root@haproxy2:~# ping 69.59.196.220
        PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data.
        --- 69.59.196.220 ping statistics ---
        7 packets transmitted, 0 received, 100% packet loss, time 6006ms

    Rebooting: rebooting the Windows web server solves this issue temporarily, with no other changes to the network, but our experience shows this issue will come back.

    Swapping network cards and switches: I noticed the link light on the switch port for the failed Windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable, with the same result. I tried changing the properties of the network card in Windows, and the server locked up and required a hard reset after clicking Apply. This Windows server has two physical network interfaces, so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again, we will know that it is not an issue with the network card. (We also tried another switch we have on hand; no change.)

    Changing network hardware driver versions: we've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2.

    Replacing network cables: as a last-ditch effort we remembered another change that had occurred: the replacement of all of the patch cords between our servers and the switch. We had purchased two sets, one green (lengths 1ft-3ft) for the private interfaces, and another set of red cables for the public interfaces. We swapped out all of the public-interface patch cables with a different brand and ran our servers without issue for a full week... aaaaaand then the problem recurred.

    Disable checksum offload, remove TProxy: we also tried disabling TCP/IP checksum offload in the driver; no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.

    Switch virtualization providers: on the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMware Server. No change.

    Switch host model: we've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model: http://en.wikipedia.org/wiki/Host_model and http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx. We did that, and.. we'll see.
