Search Results

Search found 26256 results on 1051 pages for 'information science'.


  • What's the best way to manage error logging for exceptions?

    - by Peter Boughton
    Introduction

    If an error occurs on a website or system, it is of course useful to log it, and show the user a polite message with a reference code for the error. And if you have lots of systems, you don't want this information dotted around - it is good to have a single centralised place for it.

    At the simplest level, all that's needed is an incrementing id and a serialized dump of the error details. (And possibly the "centralised place" being an email inbox.) At the other end of the spectrum is perhaps a fully normalised database that also allows you to press a button and see a graph of errors per day, or identify the most common type of error on system X, whether server A has more database connection errors than server B, and so on.

    What I'm referring to here is logging code-level errors/exceptions by a remote system - not "human-based" issue tracking, such as is done with Jira, Trac, etc.

    Questions

    I'm looking for thoughts from developers who have used this type of system, specifically with regards to:

    - What are essential features you couldn't do without?
    - What are good-to-have features that really save you time?
    - What features might seem a good idea, but aren't actually that useful?

    For example, I'd say a "show duplicates" function that identifies multiple occurrences of an error (without worrying about 'unimportant' details that might differ) is pretty essential. A button to "create an issue in [Jira/etc] for this error" sounds like a good time-saver.

    Just to reiterate, what I'm after is practical experience from people who have used such systems, preferably backed up with why a feature is awesome/terrible. (If you're going to theorise anyway, at the very least mark your answer as such.)
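    As a concrete anchor for the "simplest level" described above, here is a minimal sketch (all class and property names are hypothetical, not from any particular product) of an error record with an incrementing id, a serialized dump, and a "duplicate signature" hash to support the show-duplicates feature:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Hypothetical minimal error record: an id, a serialized dump, and a
        // "duplicate signature" so repeat occurrences can be grouped together.
        public class ErrorLogEntry
        {
            public long Id { get; set; }                   // incrementing id, doubles as the user-facing reference code
            public DateTime OccurredAtUtc { get; set; }
            public string SystemName { get; set; }         // which of the many systems reported it
            public string SerializedDetails { get; set; }  // full dump: message, stack trace, request data, ...
            public string DuplicateSignature { get; set; } // hash of the stable parts of the error

            public static string ComputeSignature(string exceptionType, string topStackFrame)
            {
                // Hash only the stable parts, so occurrences that differ in
                // 'unimportant' details (timestamps, request ids) still collapse.
                using (var sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(exceptionType + "|" + topStackFrame));
                    return BitConverter.ToString(hash).Replace("-", "");
                }
            }
        }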

    Read the article

  • Parity Initialization after putting in two new disks

    - by lbanz
    All my firmware is up to date on the server and the controllers. Storage crashed over the weekend. I rebooted it and it detected that I had put in two new disks last week (I did check that both disks completed the rebuilding process last week). After it booted into the OS, I saw that it gave me an informational message. After 18 hours it is at 54%, so it is looking healthy. But I need to replace 5 more disks in the MSA. Should I wait for this message to finish before replacing more disks?

        785 Background parity initialization is currently queued or in progress on Logical Drive 1 (15.0 TB, RAID 5). If background parity initialization is queued, it will start when I/O is performed on the drive. When background parity initialization completes, the performance of the logical drive will improve.

    Read the article

  • XP Mode under Win 7 Professional: Windows Activation Update failure despite activated Windows

    - by Cristina
    I am trying to install Windows XP Mode from here: http://www.microsoft.com/windows/virtual-pc/download.aspx (the Hardware-Assisted Virtualization Detection Tool has given me the green light to proceed). Even though my Windows was activated a year or so ago, the download button leads me to a splash screen saying "Windows validation required". I am then forced to download a WindowsActivationUpdate.exe which, after downloading some mysterious "update", fails with the error message "Update installation failed, error information 0x80070002" (rough translation from German). I've tried running it both normally and as Administrator. What could be the problem?

    Read the article

  • Software Installation Failure!

    - by NIKOS ANTONIOU
    I get the same error whenever I try to install software on my laptop. For example: I want to install Pavucontrol. So, I open the terminal and I type sudo apt-get install pavucontrol and my terminal output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libgconfmm-2.6-1c2 libglademm-2.4-1c2a libpulse-mainloop-glib0 padevchooser
          paman paprefs pavumeter pulseaudio-module-zeroconf
        The following NEW packages will be installed:
          libgconfmm-2.6-1c2 libglademm-2.4-1c2a libpulse-mainloop-glib0 padevchooser
          paman paprefs pavucontrol pavumeter pulseaudio-module-zeroconf
        0 upgraded, 9 newly installed, 0 to remove and 172 not upgraded.
        1 not fully installed or removed.
        Need to get 0B/345kB of archives.
        After this operation, 2044kB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LANG = "el_GR.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        dpkg: `ldconfig' not found on PATH.
        dpkg: 1 expected program(s) not found on PATH.
        NB: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    What is the problem and how do I fix it?

    Read the article

  • Accessing network resources via VPN connection fails

    - by LikeHoo
    I already found some information on this problem here, but I still can't get it to work. I'm trying to access some network resources on my server via VPN over the net. I'm using a Win7 Home PC here and a Windows Server 2008 R2 box with the Routing and Remote Access role installed on the server. For VPN authentication I use a local user on the server with VPN access. This user also has the rights to access the network resources, but the client neither finds the server under Network nor is it able to map the network drives... In similar topics here I found out that using the same credentials for VPN authentication and network resource access does not work, but using a different user for access didn't work either. All of the examples I found were in an Active Directory structure, but I don't have Active Directory here. Does anyone know how to solve this problem without having to use Active Directory? Thanks

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less-structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions.

    Businesses that have embraced MDM to get a single, enriched and unified view of master data - by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise, like social profiles - will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, and Financial Services, which would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data.

    For analytical success from Big Data - in other words, ROI from Big Data investments - businesses need to acquire, organize and analyze the deluge of data to make better decisions. Structured and unstructured data will need to coexist, and a tight link must be maintained between the two to extract maximum insight. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the Persons, Companies, Products, Suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • Philosophy behind the memento pattern

    - by TheSilverBullet
    I have been reading up on the memento pattern from various sources on the internet. Differing information from different sources has left me confused about why this pattern is actually needed. The dofactory implementation says that the primary intention of this pattern is to restore the state of the system. Wiki says that the primary intention is to be able to restore the changes on the system. This gives a different impression - suggesting that it is possible for a system to have a memento implementation with no need to restore, and that the ability to restore is just one feature of it. OODesign says that it is sometimes necessary to capture the internal state of an object at some point and have the ability to restore the object to that state later in time, and that such a capability is useful in case of error or failure. So, my question is: why exactly do we use this one? Is it to save previous states - or to promote encapsulation between the Caretaker and the Memento? Why is this type of encapsulation so important? Edit: For those visiting, check out this implementation!
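    For orientation, here is a minimal sketch of the pattern in C# (class names are hypothetical). The point to notice is the encapsulation half of the question: the Caretaker can store and hand back snapshots, but cannot look inside them.

        using System.Collections.Generic;

        // Originator: the object whose internal state is captured and restored.
        public class Editor
        {
            private string _text = "";

            public void Type(string words) { _text += words; }

            public Memento Save() { return new Memento(_text); }
            public void Restore(Memento m) { _text = m.State; }

            // The snapshot. Its state is internal rather than public, so code
            // outside this assembly (a Caretaker in another project) can store
            // mementos but cannot inspect or edit what is inside them - an
            // approximation of the pattern's "narrow interface" in C#.
            public class Memento
            {
                internal string State { get; private set; }
                internal Memento(string state) { State = state; }
            }
        }

        // Caretaker: keeps the history without knowing what a snapshot contains.
        public class History
        {
            private readonly Stack<Editor.Memento> _snapshots = new Stack<Editor.Memento>();
            public void Push(Editor.Memento m) { _snapshots.Push(m); }
            public Editor.Memento Pop() { return _snapshots.Pop(); }
        }

    Usage is then history.Push(editor.Save()) before a risky operation and editor.Restore(history.Pop()) to roll back - which reconciles the two readings: the restore ability is the feature, and the encapsulation is how the pattern delivers it without exposing the Originator's internals.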

    Read the article

  • Tools for analyzing performance of SQL Server/Express?

    - by Adam Crossland
    The application that I have customized and continue to support for my client is seeing dramatic performance problems in the field. Simple queries on rather small datasets take over a minute, when I would expect them to complete in sub-second times. My current theory is that SQL Server Express 2005 is too limited for the rather non-trivial demands being made of it, but I am not sure how to go about gathering data that I can use to either prove my point or allow me to move on to finding another cause. Can anyone point me toward some tools that would allow me to analyze the load on this database? Information such as simultaneous connections, execution times of individual queries, memory usage - heck, just any profiling data at all would be a help. Many thanks.

    Read the article

  • Web Services and code lists

    - by 0x0me
    Our team is heavily discussing how to handle code lists in a web service definition. The design goal is to describe a provider API for querying a system using various values. Some of them are catalogs, i.e. code lists. A catalog or code list is a set of key/value pairs. There are different systems (at least 3) maintaining possibly different code lists. Each system should implement the provider API, whereas each system might have a different code list for the same business entity - think of colors: one system knows [(1,'red'),(2,'green')] and another one knows [(1,'lightgreen'),(2,'darkgreen'),(3,'red')], etc. Access to the different provider API implementations will be encapsulated by a query service, but there is already one candidate that might use at least one provider API directly. The current options for designing the API under discussion are:

    1. Use an abstract code list in the interface definition: the web service interface defines a well-known set of code lists which are expected to be used for querying and returning data. Each provider API implementation has to map the request and response values from those abstract code lists to the system-specific ones.

    2. Let the query component handle the code lists: the encapsulating query service knows the code list set of each provider API implementation and takes care of mapping the input and output to the system-specific code lists of the queried system.

    3. Do not use code lists in the query definition at all: just query code lists by a plain string and let the provider API implementation figure out the right value. This might lead to a loss of information and possibly many false positives, due to the fact that the input string cannot be canonically mapped to a code list value (e.g. green - lightgreen or green - darkgreen or both).

    What are your experiences with, or solutions to, such a problem? Could you give any recommendations?
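    To make option 1 concrete, here is a rough sketch (hypothetical names, not a worked-out design) of a per-provider mapping between the canonical code list of the interface and a system-specific one:

        using System.Collections.Generic;

        // Canonical code list defined once in the web service interface.
        public enum CanonicalColor { Red, Green }

        // Each provider API implementation owns a two-way mapping between the
        // canonical codes and its own system-specific keys.
        public class ProviderColorMap
        {
            private readonly Dictionary<CanonicalColor, int> _toLocal;
            private readonly Dictionary<int, CanonicalColor> _toCanonical;

            public ProviderColorMap(Dictionary<CanonicalColor, int> toLocal)
            {
                _toLocal = toLocal;
                _toCanonical = new Dictionary<int, CanonicalColor>();
                foreach (KeyValuePair<CanonicalColor, int> pair in toLocal)
                    _toCanonical[pair.Value] = pair.Key;
            }

            public int ToLocalKey(CanonicalColor color) { return _toLocal[color]; }
            public CanonicalColor ToCanonical(int localKey) { return _toCanonical[localKey]; }
        }

    Note that the second system in the example (lightgreen and darkgreen both standing in for Green) cannot be expressed as a one-to-one map: the local-to-canonical direction must collapse two keys into one, so round-tripping is lossy. That is the same loss of information option 3 runs into, just surfacing at mapping time instead of query time.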

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses Core Data as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to Core Data as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of:

    1. Always treat the client-side data as higher priority
    2. The same for the server side
    3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit

    Though I'm fairly certain the 3rd option will open the door to some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to that question says it probably can't be done.
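    For what option 3 looks like in practice, here is a bare-bones sketch (hypothetical types, not from Core Data or any sync framework) of a field-level last-write-wins merge - with a comment marking exactly where the corruption risk comes from:

        using System;

        // A field that carries its own edit timestamp, as in option 3.
        public class VersionedField<T>
        {
            public T Value { get; private set; }
            public DateTime EditedAtUtc { get; private set; }

            public VersionedField(T value, DateTime editedAtUtc)
            {
                Value = value;
                EditedAtUtc = editedAtUtc;
            }

            // Last-write-wins: keep whichever side was edited most recently.
            // If the device clock and the server clock disagree (or a device
            // sits offline with a skewed clock for days), a stale edit can
            // silently overwrite a newer one - the corruption risk noted above.
            public static VersionedField<T> Merge(VersionedField<T> client, VersionedField<T> server)
            {
                return client.EditedAtUtc >= server.EditedAtUtc ? client : server;
            }
        }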

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, and see the release notes, reference manual, and admin guide, on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have different processor types: you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on both the source and target systems must run Solaris 11, and you also need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.

    Read the article

  • Detecting damage done by virus

    - by user38471
    This morning, after I went to college, a virus infected my PC without any user interaction at my end. When I came home, my computer was completely frozen and infected with lots of trojans. I have not typed anything important since returning, so no keys could have been logged. However, I want to establish exactly what happened between the time of infection and the time my computer crashed, to see what could potentially have been done remotely by an attacker. The virus my PC was diagnosed with was "fakespypro", on a fully updated Windows 7 installation with the firewall enabled. My computer was connected to an internal dorm-room network, so that probably has something to do with it. Any further information about how I could trace this infection back, or ways to discover what data might have been stolen, would be greatly appreciated.

    Read the article

  • Google Analytics: How long does it take users to trigger an event

    - by Stephen Ostermiller
    I implemented Google Analytics event tracking on my currency conversion website. The typical user flow is:

    1. User lands on a page about two currencies.
    2. User enters an amount to be converted.
    3. The site shows the user the value in the other currency.
    4. The JavaScript sends Google Analytics a "converted" event when the currency conversion is done.

    Because most of the sessions on my site are single-page, the event tracking is very important for me to know whether users find my page useful. I'm looking for a way to figure out how long it typically takes users to enter a value in the form. I expect that this data would form a bell curve centered around a specific amount of time after page load. If I can't get a graph, I could make do with a median value. I would like to use this as a core metric in usability testing. Is there a way to get this information out of Google Analytics?

    Read the article

  • Why can't I compare two Texture2D's?

    - by Fiona
    I am trying to use an accessor, as it seems to me that that is the only way to accomplish what I want to do. Here is my code (Game1.cs):

        public class GroundTexture
        {
            private Texture2D dirt;

            public Texture2D Dirt
            {
                get { return dirt; }
                set { dirt = value; }
            }
        }

        public class Main : Game
        {
            public static Texture2D texture = tile.Texture;
            GroundTexture groundTexture = new GroundTexture();
            public static Texture2D dirt;

            protected override void LoadContent()
            {
                Tile tile = (Tile)currentLevel.GetTile(20, 20);
                dirt = Content.Load<Texture2D>("Dirt");
                groundTexture.Dirt = dirt;
                Texture2D texture = tile.Texture;
            }

            protected override void Update(GameTime gameTime)
            {
                if (texture == groundTexture.Dirt)
                {
                    player.TileCollision(groundBounds);
                }
                base.Update(gameTime);
            }
        }

    I removed irrelevant information from the LoadContent and Update functions. On the following line:

        if (texture == groundTexture.Dirt)

    I am getting the error:

        Operator '==' cannot be applied to operands of type 'Microsoft.Xna.Framework.Graphics.Texture2D' and 'Game1.GroundTexture'

    Am I using the accessor correctly? And why do I get this error? Dirt is a Texture2D, so they should be comparable. This uses a few functions from a program called Realm Factory, which is a tile editor. The numbers "20, 20" are just a sample from the level I made, and tile.Texture returns the sprite, which here is the content item Dirt.png. Thank you very much! (I posted this on the main Stack Overflow site, but after several days didn't get a response. Since it has to do mainly with Texture2D, I figured I'd ask here.)
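    One hedged observation, since only the poster's real build can confirm it: the compiler names the right-hand operand as Game1.GroundTexture, not Texture2D, which suggests the line that actually failed was comparing against the wrapper object itself rather than its Dirt property. As a fragment of the Update method above:

        protected override void Update(GameTime gameTime)
        {
            // Rejected with CS0019: 'texture' is a Texture2D but 'groundTexture'
            // is the GroundTexture wrapper, so the operand types differ.
            // if (texture == groundTexture) { ... }

            // Compiles: both operands are Texture2D. Texture2D does not overload
            // ==, so this is reference equality - true only when both variables
            // point at the same loaded texture instance.
            if (texture == groundTexture.Dirt)
            {
                player.TileCollision(groundBounds);
            }
            base.Update(gameTime);
        }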

    Read the article

  • We are moving an Access-based corporate front-end into a web-based app

    - by Max Vernon
    We have an enterprise application with a front end written in Microsoft Access 2003 that has evolved over the past 6 years. The back-end data, and a fair amount of back-end logic, is contained within several Microsoft SQL Server databases. This front-end app consists of around 180 forms and over 120,000 lines of code, and interacts with VB.Net DLLs that support various critical functions used by our sales force. The current system makes use of 3 monitors to display various information; the Access app uses COM+ to control Microsoft Outlook and Internet Explorer for various purposes. The Access front end sometimes occupies 2 screens, automatically resizing itself based on Windows API-reported screen dimensions. The app also uses a Google map to present data to our agents, and allows two-way interactivity with the map through COM+ connectivity to JavaScript contained in the Google map. At the urging of senior management, we are looking to completely rewrite this application using some web-based technology, such as ASP.Net or perhaps a LAMP stack (the thinking with the LAMP stack being that "free" is pretty cheap). We want to move to a web-based app so we can eliminate the dependency on our physical location for hiring new sales force members. Currently, our main office is full to capacity, and we need to continue growing the company. Does anyone have any thoughts on what would be the best technology to use for a web-based app of this magnitude? Keep in mind that the app is dependent on back-end services on our existing infrastructure, and handles financial data and personal customer data, among other things. [I've looked at Best practices for moving large MS Access application towards .Net? and read the answers and most of the comments. Interesting reading with some valid points, but our C.O.O. and contracted Software Architect are pushing for a full web-based app, not a .Net Windows app.]

    Read the article

  • Could not load file or assembly 'System.Data.SQLite' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    - by Om Talsania
    Problem description: Could not load file or assembly 'System.Data.SQLite' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    Likely to be reproduced when: You have downloaded a sample application that is a 32-bit application targeted at ASP.NET 2.0 or 3.5, and you have IIS7 on a 64-bit OS running .NET 4.0, because the default setting for running 32-bit applications on IIS7 with a 64-bit OS is false.

    Resolution:

    1. Go to the IIS Management Console: Start -> Administrative Tools -> Internet Information Services (IIS) Manager.
    2. Expand your server in the left pane and go to Application Pools.
    3. Right-click and select 'Add Application Pool'.
    4. Create a new AppPool. I have named it "ASP.NET v2.0 AppPool (32-bit)" and selected .NET Framework v2.0.50727 because I intend to run my ASP.NET 3.5 application on it.
    5. Now right-click the newly created AppPool and select Advanced Settings.
    6. Change the property "Enable 32-Bit Applications" from False to True.
    7. Now select your actual web application from the left panel. Right-click the web application and go to Manage Application -> Advanced Settings.
    8. Change the property "Application Pool" to your newly created AppPool.

    And... the error is gone.

    Read the article

  • iis7 .net webservice 404 error

    - by agilenoob
    I have a web service at /test/Service1.asmx in the same folder as a page, /test/test.aspx. The page works fine, but I get the message below for the service in the same location. I know the file is there and the URL is correct, and I have added the script module and managed handler as well. If anyone knows what I'm missing here, I'd appreciate it.

        Server Error in '/' Application.
        The resource cannot be found.
        Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have
        been removed, had its name changed, or is temporarily unavailable. Please review the following
        URL and make sure that it is spelled correctly.
        Requested URL: /test/Service1.asmx
        Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016

    Failed request log:

        ModuleName           ManagedPipelineHandler
        Notification         128
        HttpStatus           404
        HttpReason           Not Found
        HttpSubStatus        0
        ErrorCode            0
        ConfigExceptionInfo
        Notification         EXECUTE_REQUEST_HANDLER
        ErrorCode            The operation completed successfully. (0x0)

    Read the article

  • Colored Vertical Lines upon boot and nomodeset DOES NOT fix it

    - by user2851032
    I have installed Lubuntu 13.04 on a Dell Inspiron 1501 laptop, rebooted the machine, and encountered this problem. I edited the GRUB configuration to remove "quiet splash" and enter "nomodeset", updated GRUB, and everything was fine. I could reboot the machine without any trouble. However, if I unplug the machine, wait a few seconds, and plug it back in, the problem with the colored lines comes back, and nomodeset no longer helps to solve it. I tried using radeon.modeset=0 instead of nomodeset, and that also works across multiple reboots until I unplug the machine and plug it back in. I was finally able to get around the problem by entering "radeon.exapixmaps=0" instead of radeon.modeset=0. I suppose I kind of made up that boot option using some information from an Arch Wiki page. This would keep working through reboots and even if I unplugged the laptop. It was working fine for quite a while. A few weeks later, I had some unrelated issues with the Java IcedTea plugin, and since 13.10 had just come out, I thought I would try upgrading. The upgrade didn't fix the problem with Java, and after unplugging the machine and trying to use it later, I was back to this problem with the black screen and colored vertical lines. I am completely out of ideas on what to try. It took me a week to figure out how to get it working the first time, but the solution I had isn't working anymore.

    Read the article

  • How to use the second volume device of Amazon EC2

    - by Khoyendra Pande
    I have two volumes attached to my Amazon EC2 instance, and the 1 GiB default volume I am using has filled up. Now I want to use my second volume, which is 9 GiB. I ran cat /proc/partitions and got:

        major minor  #blocks  name
          202     1  1048576  xvda1
          202    80  9437184  xvdf

    Then I ran mkfs.ext3 -F /dev/sdf and it shows:

        mkfs.ext3: No such file or directory while trying to determine filesystem size

    Then I ran df and got:

        Filesystem   1K-blocks    Used Available Use% Mounted on
        /dev/xvda1     1032088 1031280         0 100% /
        tmpfs           313160       8    313152   1% /lib/init/rw
        udev            297800      24    297776   1% /dev
        tmpfs           313160       4    313156   1% /dev/shm
        overflow          1024      32       992   4% /tmp

    This means I am still unable to use my 9 GiB volume. I have confirmed I have two volumes, with attachment information i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached), where only sda1 is in use. Does anyone know how I can use my second volume (sdf)? Thanks.

    Read the article

  • Routing Essentials

    - by zharvey
    I'm a programmer trying to fill a big hole in my understanding of networking basics. I've been reading a good book (Networking Bible by Sosinsky), but I have been finding that it contains a lot of "assumed" information, where terms and concepts are thrown at the reader without a proper introduction. I understand that a "route" is a path through a network, but I am struggling to visualize some routing-based concepts. Namely:

    - How do routes actually manifest themselves in the hardware? Are they just a list of IP addresses that gets computed at the network layer and then executed by the transport layer?
    - What kind of data exists in a so-called routing table? Is a routing table just the mechanism for holding these lists of IP addresses (see above)?
    - What are the performance pros/cons of a static route, as opposed to a dynamic route?
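    To give the middle question something concrete (a simplified, hypothetical illustration - real forwarding tables live in kernel or router hardware structures, not application code): a routing table is essentially a list of rows of the form (destination prefix, next hop, interface, metric), and forwarding picks the longest matching prefix:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Route
        {
            public uint Network;        // destination network, e.g. 10.1.0.0
            public int PrefixLength;    // leading bits of the address that must match
            public string NextHop;      // where to send the packet next
            public string Interface;    // which NIC to send it out of
            public int Metric;          // tie-breaker cost, set by hand or by a routing protocol
        }

        public static class RoutingDemo
        {
            static uint ToUInt(string dotted) =>
                dotted.Split('.').Select(uint.Parse).Aggregate((acc, octet) => (acc << 8) | octet);

            static bool Matches(Route r, uint address)
            {
                uint mask = r.PrefixLength == 0 ? 0 : uint.MaxValue << (32 - r.PrefixLength);
                return (address & mask) == r.Network;
            }

            public static void Main()
            {
                var table = new List<Route>
                {
                    new Route { Network = ToUInt("10.0.0.0"), PrefixLength = 8,  NextHop = "10.0.0.1",    Interface = "eth1", Metric = 10 },
                    new Route { Network = ToUInt("10.1.0.0"), PrefixLength = 16, NextHop = "10.1.0.1",    Interface = "eth2", Metric = 10 },
                    new Route { Network = ToUInt("0.0.0.0"),  PrefixLength = 0,  NextHop = "192.168.0.1", Interface = "eth0", Metric = 1 }, // default route
                };

                uint destination = ToUInt("10.1.2.3");

                // Longest-prefix match: the most specific matching route wins,
                // so 10.1.2.3 goes via 10.1.0.0/16, not 10.0.0.0/8.
                Route best = table.Where(r => Matches(r, destination))
                                  .OrderByDescending(r => r.PrefixLength)
                                  .First();

                Console.WriteLine("via " + best.NextHop + " on " + best.Interface);
            }
        }

    In these terms, a static route is simply a row an administrator typed in, while a dynamic route is the same kind of row installed and updated by a routing protocol.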

    Read the article

  • Two Copies of "Silverlight 5 In Action" to Give Away and a FREE chapter!

    - by Dave Campbell
    I know most of you have seen my post from Tuesday where I talked about giving away 2 copies of Pete's book on Monday morning, July 18th. Well... I'm repeating it, because it's a smoking deal... for the cost of an email you too can take a shot at getting Pete's newly released "Silverlight 5 In Action" free.

    2 Important Pieces of Information

    1) The deadline: midnight Sunday night, July 17, 2012, Arizona time... if you know me, you know I've lived here too long and am timezone stupid... so don't make me calculate it out :)
    2) The how: I have a special email address for submittals: mailto:[email protected]?Subject=Giveaway.
    3) Oh yeah... I lied about only 2 pieces of info... number 3... there may be other surprises on Monday morning... 'nuff said.
    4) And just to pump up the volume on the book... how about a FREE chapter you can read right here: Working with RSS and Atom!
    5) Send me an email and Stay in the 'Light!

    Read the article

  • What kind of SATA interface is on a Thinkpad X120E?

    - by Jorge Castro
    I recently ordered a Thinkpad X120E with an AMD Fusion (Zacate) chipset. I am eyeballing an SSD for it; however, newer SSDs are coming out with 6 Gbps SATA interfaces. I doubt such a cheap laptop has 6 Gbps SATA, but I'm debating waiting a bit longer until the Intel 510 series comes out, if anything to future-proof myself by putting one in this laptop, so that later on, when I do upgrade to a laptop with 6 Gbps SATA, I'll be good to go. The hardware manual mentions that the motherboard is for an "AMD Fusion E-350", but the specifications of the individual hardware parts aren't in the manual. Does anyone have any information on the kind of SATA controllers in Fusion laptops, so I can make a better purchasing decision?

    Read the article

  • Advice on a 240,000 sqft outdoor wireless network

    - by whlspacedude
    I would be very appreciative of some advice on the purchase of equipment to provide a wireless network that covers the entire area of an outdoor arena. The area is rectangular-ish in shape, 400 ft wide and 600 ft long. It has 6 light towers: 1 on each of the 400-foot ends and 2 on each of the 600-foot sides. I can mount on anything and spend as much money as needed. The network needs to provide access for up to 15 wireless HD cameras with audio, plus a public Wi-Fi network. Can someone point me in the right direction as far as equipment and antennas? I can provide any additional information that you may need.

    Read the article

  • "Opportunity" to take over maintenance of a small internal website. What should I do?

    - by Dan
    I have been offered an "opportunity" to take over maintenance of a small internal website run by my group that provides information about schedules and photos of events the group has done. My manager sent me the link to the site and I checked it out. The site looked clean and neat, but loaded in ~5 seconds, which I thought was a little long considering the site really didn't contain a lot of content. This prompted me to take a look under the hood at the page's source code. To my horror, it'd been totally hacked together using nested tables! I'm new, so I really can't say no to this "opportunity", so what should I do with it? Every fiber of my being feels that the only correct thing to do is overhaul the site using CSS, divs, spans, and any other appropriate tags that a sane/good web developer would have used to begin with, instead of depending on the rendering magic of tables. But I'd like to ask programmers with more experience than me, who have been in this situation: what should I do? Is my only realistic option to leave the horror as-is and only adjust the content as requested? I'm really torn between good development and the corporate reality I'm part of. Is there some kind of middle ground where things can be made better, even if they're not perfect? Thanks ahead of time.

    Read the article

  • Script to gather all the files ending in .log and create a tar.gz file.

    - by Oscar Reyes
    I'm currently using this script line to find all the log files in a given directory structure and copy them to another directory where I can easily compress them:

        find . -name "*.log" -exec cp \{\} /tmp/allLogs/ \;

    The problem I have is that the directory/subdirectory information gets lost, because I'm copying only the file. For instance, I have:

        ./product/install/install.log
        ./product/execution/daily.log
        ./other/conf/blah.log

    And I end up with:

        /tmp/allLogs/install.log
        /tmp/allLogs/daily.log
        /tmp/allLogs/blah.log

    And I would like to have:

        /tmp/allLogs/product/install/install.log
        /tmp/allLogs/product/execution/daily.log
        /tmp/allLogs/other/conf/blah.log

    Read the article
