Search Results

Search found 22668 results on 907 pages for 'command prompt'.


  • rdoc and the "--accessor" option

    - by Brian Ploetz
    rdoc --help says: --accessor, -A accessorname[,..] comma separated list of additional class methods that should be treated like 'attr_reader' and friends. Option may be repeated. Each accessorname may have '=text' appended, in which case that text appears where the r/w/rw appears for normal accessors. Does anyone have any working examples of doing this (both the accessor method definition and the rdoc command invocation)? No matter what combination I try, my accessors will not show up in the RDoc output. Thanks.
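    One form of setup that matches the help text above (a sketch only; the 'property' macro and file name are made up for illustration, and results can vary by RDoc version):

      # widget.rb uses a hypothetical 'property' class method instead of attr_accessor:
      printf 'class Widget\n  property :size\nend\n' > widget.rb

      # Ask RDoc to treat calls to 'property' like attr_accessor; the '=rw' text
      # is what appears in the r/w/rw column of the generated documentation:
      rdoc --accessor property=rw widget.rb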

    Read the article

  • Correctly reading alphanumeric fields into R

    - by gd047
    A tab-delimited text file, which is actually an export (using bcp) of a database table, has this form:

      102   1  01  e113c  3224.96    12
      102   1  01  e185   101127.25  12
      102   2  01  e185   176417.90  12
      102A  3  01  e185   26261.03   12

    I tried to import it into R with a command like data <- read.delim("C:\\test.txt", header = FALSE, sep = "\t"). The problem is that the 3rd column, which is actually a varchar (alphanumeric) field, is mistakenly read as integer (as there are no letters in the entire column), and the leading zeros disappear. The same thing happened when I imported the data directly from the database using odbcConnect: again that column was read as integer. str(data) shows: $ code: int 1 1 1 1 1 1 6 1 1 8 ... How can I import such a dataset into R correctly, so as to be able to safely populate that db table again after doing some data manipulations?
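    One way to keep that column as text is to spell out the column classes so read.delim never guesses; a sketch as an Rscript one-liner (the six-column layout is taken from the sample above; the quoting shown is for a Unix-style shell, adjust it on Windows):

      # colClasses fixes each column's type; "character" preserves the leading zeros:
      Rscript -e 'x <- read.delim("C:/test.txt", header=FALSE, sep="\t", colClasses=c("character","integer","character","character","numeric","integer")); str(x)'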

    Read the article

  • What are the default values for arch and code options when using nvcc?

    - by Auron
    When compiling your CUDA code, you have to select which architecture your code is being generated for. nvcc provides two parameters to specify this architecture, basically: arch specifies the virtual architecture, which can be compute_10, compute_11, etc.; code specifies the real architecture, which can be sm_10, sm_11, etc. So a command like this: nvcc x.cu -arch=compute_13 -code=sm_13 will generate 'cubin' code for devices with 1.3 compute capability. Please correct me if I'm wrong. What I would like to know is: what are the default values for these two parameters? Which architecture does nvcc use when no value for arch or code is specified?
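    One way to see what nvcc actually targets when both flags are omitted (a sketch; it assumes nvcc and cuobjdump are on the PATH, and the exact output varies by CUDA version):

      # Print the individual compilation steps without running them; the ptxas
      # step in the output shows the architecture nvcc picked by default:
      nvcc --dryrun -c x.cu -o x.o

      # Or compile and inspect the embedded PTX; its ".target" directive names
      # the virtual architecture that was used:
      nvcc -c x.cu -o x.o
      cuobjdump -ptx x.o | grep -i target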

    Read the article

  • Options for keeping models and the UI in sync (in a desktop application context)

    - by Benju
    In my experience, only two patterns have worked for large-scale desktop application development when trying to keep the model and UI in sync. 1. An event bus approach: command objects (e.g. UserDemographicsUpdatedEvent) are fired via a shared event bus, and the various parts of the UI update if they are bound to the same user object updated in this event. 2. Bind the UI directly to the model, adding listeners to the model itself as needed. I find this approach rather clunky as it pollutes the domain model. Does anybody have other suggestions? In a web application with something like JSP, binding to the model is easy, as you usually only care about the state of the model at the time your request comes in; not so in a desktop-type application. Any ideas?

    Read the article

  • WCF Web Service: change WSDL name and targetNamespace

    - by Graham
    All, I'm a little new to WCF over IIS but have done some ASMX web services before. My WCF service is up and running but the helper page generated by the web service for me has the default names, i.e. the page that says: You have created a service. To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax: svcutil.exe http://localhost:53456/ServicesHost.svc?wsdl In a standard ASMX site I would use method/class attributes to give the web service a name and a namespace. When I click on the link the WSDL has: <wsdl:definitions name="SearchServices" targetNamespace="http://tempuri.org/" i.e. not the WCF Service Contract Name and Namespace from my Interface. I assume the MEX is using some kind of default settings but I'd like to change them to be the correct names. How can I do this?

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information such as index stuff), and the biggest table is 1.1 million records, which takes up 150MB of data. We have less than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SqlServer (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at? Normally I wouldn't care and would make this a passive research project, except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even if it were down to 5GB!) than 20GB of data each week. sp_spaceused reports the following:

      Navigator-Production  19184.56 MB  3.02 MB

    And the second part of it:

      19640872 KB  19512112 KB  108184 KB  20576 KB

    While I've found a few other scripts (such as the ones from two of the server database size questions here), they all report the same information found either above or below. The script I am using is from SqlTeam. Here is the header info:

      * BigTables.sql
      * Bill Graziano (SQLTeam.com)
      * graz@<email removed>
      * v1.11

    The top few tables show this (table, rows, reserved space, data, index, unused, etc.):

      Activity         1143639  131 MB  89 MB  41768 KB  1648 KB   46%  1%
      EventAttendance   883261   90 MB  58 MB  32264 KB   328 KB   54%  0%
      Person            113437   31 MB  15 MB  15752 KB   912 KB  103%  3%
      HouseholdMember   113443   12 MB   6 MB   5224 KB   432 KB   82%  4%
      PostalAddress      48870    8 MB   6 MB   2200 KB   280 KB   36%  3%

    The rest of the tables are either the same size or smaller. No more than 50 tables. Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything. I ran the dbcc shrink command as well as updating the usage before and after. And over and over. An interesting thing I found is that when I restarted the server and confirmed no one was using it (and no maintenance procs were running; this is a very new application, under a week old) and went to run the shrink, every now and then it would say something about the data having changed. Googling yielded too few useful answers, with the obvious not applying (it was 1am and I disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, is probably under 50k rows. Even if those were the biggest rows, that wouldn't be more than 100MB, I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go. Update 2: sp_spaceused 'Activity' yields this (which seems right on the money):

      Activity  1143639  134488 KB  91072 KB  41768 KB  1648 KB

    Fill factor was 90. All primary keys are ints. Here is the command I used to 'updateusage': DBCC UPDATEUSAGE(0); Update 3: Per Edosoft's request:

      Image  111975  2407773  19262184

    It appears as though the Image table believes it's the 19GB portion. I don't understand what this means, though. Is it really 19GB, or is it misrepresented?
    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated could be the case. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size. Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to be roughly 2-5KB each; on a normal file system they don't consume much space, but on SqlServer they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.

    Read the article

  • Struts 1 - How to display ActionMessages

    - by Yatendra Goel
    I am displaying ActionMessages through a JSP file by the following command: <logic:messagesPresent message="true"> <ul id="messsages"> <html:messages id="msg" message="true"> <li><bean:write name="msg"/> </li> </html:messages> </ul> </logic:messagesPresent> Now I want to display only selected messages. How can I indicate which message to display?

    Read the article

  • MS Access Print Report using VBA

    - by LanguaFlash
    I have a very VBA-intensive report. When I preview it everything is great, but when I print it after previewing, things go wacky. I have spent many hours narrowing down the possibilities and have concluded with a certain level of confidence that it is a bug in MS Access. Up to this point my method for printing reports has been to open the report using docmd.openreport "report". I then use the docmd.printout command so that I can set the page range, collation, etc. Is there a way to print a report directly and still be able to set options like page range, collate, etc. without doing a preview first? Thanks, Jeff

    Read the article

  • Problem with StandardOutput stream in async mode.

    - by JF
    Hi everyone. I have a program that launches command-line processes in async mode, using BeginOutputReadLine. My problem is that the .Exited event is triggered while there are still .OutputDataReceived events being raised. What I do in my .Exited event must happen only once all my .OutputDataReceived events are done, or I'll be missing some output. I looked in the Process class to see if anything could be useful to me, such as a way to wait for the stream to be empty, but all I found is for sync mode only. Can any of you help? Thanks.

    Read the article

  • implementing ioctl() commands in FreeBSD

    - by thecoffman
    I am adding some code to an existing FreeBSD device driver and I am trying to pass a char* from user space to the driver. I've implemented a custom ioctl() command using the _IOW macro like so: #define TIBLOOMFILTER _IOW(0,253,char*) My call looks something like this: int file_desc = open("/dev/ti0", O_RDWR); ioctl(file_desc, TIBLOOMFILTER, (*filter).getBitArray()); close(file_desc); When I call ioctl() I get "Inappropriate ioctl for device" as an error message. Any guess as to what I may be doing wrong? I've defined the same macro in my device driver, and added it to the case statement.

    Read the article

  • Screenshot of the Nexus One from adb?

    - by Marcus
    My goal is to be able to type a one word command and get a screenshot from a rooted Nexus One attached by USB. So far, I can get the framebuffer which I believe is a 32bit xRGB888 raw image by pulling it like this: adb pull /dev/graphics/fb0 fb0 From there though, I'm having a hard time getting it converted to a png. I'm trying with ffmpeg like this: ffmpeg -vframes 1 -vcodec rawvideo -f rawvideo -pix_fmt rgb8888 -s 480x800 -i fb0 -f image2 -vcodec png image.png That creates a lovely purple image that has parts that vaguely resemble the screen, but it's by no means a clean screenshot.
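    A sketch of one pipeline for a 32-bit xRGB framebuffer (the pixel-format name is a guess; depending on byte order you may need bgr32 or bgra instead of rgb32, and other devices use different layouts entirely):

      adb pull /dev/graphics/fb0 fb0

      # rgb8888 is not a valid ffmpeg pix_fmt name; the packed 32-bit formats are
      # called rgb32/bgr32/rgba/bgra, so try those against the raw dump:
      ffmpeg -f rawvideo -pix_fmt rgb32 -s 480x800 -i fb0 -vframes 1 screen.png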

    Read the article

  • Tracking object entries when "playing" a Windows Enhanced Metafile

    - by lzcd
    One of my current projects requires that I work out what colours are being used in an EMF file. I have been able to successfully whip up a file parser in C# that notes all references to colours... but haven't had any luck tracking which objects are in use across the entire file, so I can tell apart colours that are merely referenced from colours that are actually used to paint on screen. The older-style WMF files are easy, as the object library starts at zero and one can simply track each "Create Object" style command... but EMF files are proving trickier, as there seem to be pre-existing entries in the library (if the "Select Object" commands I'm seeing are to be believed). Would anyone be able to either enlighten me on how to track objects in the library correctly with EMF files... or suggest an easier alternative to work out which colours are actually being used in the file (as opposed to just being defined)?

    Read the article

  • Trouble updating my datagrid in WPF

    - by wrigley06
    As the title indicates, I'm having trouble updating a datagrid in WPF. Basically what I'm trying to accomplish is a datagrid, that is connected to a SQL Server database, that updates automatically once a user enters information into a few textboxes and clicks a submit button. You'll notice that I have a command that joins two tables. The data from the Quote_Data table will be inserted by a different user at a later time. For now my only concern is getting the information from the textboxes and into the General_Info table, and from there into my datagrid. The code, which I'll include below compiles fine, but when I hit the submit button, nothing happens. This is the first application I've ever built working with a SQL Database so many of these concepts are new to me, which is why you'll probably look at my code and wonder what is he thinking. public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } public DataSet mds; // main data set (mds) private void Window_Loaded_1(object sender, RoutedEventArgs e) { try { string connectionString = Sqtm.Properties.Settings.Default.SqtmDbConnectionString; using (SqlConnection connection = new SqlConnection(connectionString)) { connection.Open(); //Merging tables General_Info and Quote_Data SqlCommand cmd = new SqlCommand("SELECT General_Info.Quote_ID, General_Info.Open_Quote, General_Info.Customer_Name," + "General_Info.OEM_Name, General_Info.Qty, General_Info.Quote_Num, General_Info.Fab_Drawing_Num, " + "General_Info.Rfq_Num, General_Info.Rev_Num, Quote_Data.MOA, Quote_Data.MOQ, " + "Quote_Data.Markup, Quote_Data.FOB, Quote_Data.Shipping_Method, Quote_Data.Freight, " + "Quote_Data.Vendor_Price, Unit_Price, Quote_Data.Difference, Quote_Data.Vendor_NRE_ET, " + "Quote_Data.NRE, Quote_Data.ET, Quote_Data.STI_NET, Quote_Data.Mfg_Time, Quote_Data.Delivery_Time, " + "Quote_Data.Mfg_Name, Quote_Data.Mfg_Location " + "FROM General_Info INNER JOIN dbo.Quote_Data ON General_Info.Quote_ID = Quote_Data.Quote_ID", connection); SqlDataAdapter da = new SqlDataAdapter(cmd); DataTable dt = new DataTable(); da.Fill(dt); MainGrid.ItemsSource = dt.DefaultView; mds = new DataSet(); da.Fill(mds, "General_Info"); MainGrid.DataContext = mds.Tables["General_Info"]; } } catch (Exception ex) { MessageBox.Show(ex.Message); } // renaming column names from the database so they are easier to read in the datagrid MainGrid.Columns[0].Header = "#"; MainGrid.Columns[1].Header = "Date"; MainGrid.Columns[2].Header = "Customer"; MainGrid.Columns[3].Header = "OEM"; MainGrid.Columns[4].Header = "Qty"; MainGrid.Columns[5].Header = "Quote Number"; MainGrid.Columns[6].Header = "Fab Drawing Num"; MainGrid.Columns[7].Header = "RFQ Number"; MainGrid.Columns[8].Header = "Rev Number"; MainGrid.Columns[9].Header = "MOA"; MainGrid.Columns[10].Header = "MOQ"; MainGrid.Columns[11].Header = "Markup"; MainGrid.Columns[12].Header = "FOB"; MainGrid.Columns[13].Header = "Shipping"; MainGrid.Columns[14].Header = "Freight"; MainGrid.Columns[15].Header = "Vendor Price"; MainGrid.Columns[16].Header = "Unit Price"; MainGrid.Columns[17].Header = "Difference"; MainGrid.Columns[18].Header = "Vendor NRE/ET"; MainGrid.Columns[19].Header = "NRE"; MainGrid.Columns[20].Header = "ET"; MainGrid.Columns[21].Header = "STINET"; MainGrid.Columns[22].Header = "Mfg. Time"; MainGrid.Columns[23].Header = "Delivery Time"; MainGrid.Columns[24].Header = "Manufacturer"; MainGrid.Columns[25].Header = "Mfg. 
Location"; } private void submitQuotebtn_Click(object sender, RoutedEventArgs e) { CustomerData newQuote = new CustomerData(); int quantity; quantity = Convert.ToInt32(quantityTxt.Text); string theDate = System.DateTime.Today.Date.ToString("d"); newQuote.OpenQuote = theDate; newQuote.CustomerName = customerNameTxt.Text; newQuote.OEMName = oemNameTxt.Text; newQuote.Qty = quantity; newQuote.QuoteNumber = quoteNumberTxt.Text; newQuote.FdNumber = fabDrawingNumberTxt.Text; newQuote.RfqNumber = rfqNumberTxt.Text; newQuote.RevNumber = revNumberTxt.Text; try { string insertConString = Sqtm.Properties.Settings.Default.SqtmDbConnectionString; using (SqlConnection insertConnection = new SqlConnection(insertConString)) { insertConnection.Open(); SqlDataAdapter adapter = new SqlDataAdapter(Sqtm.Properties.Settings.Default.SqtmDbConnectionString, insertConnection); SqlCommand updateCmd = new SqlCommand("UPDATE General_Info " + "Quote_ID = @Quote_ID, " + "Open_Quote = @Open_Quote, " + "OEM_Name = @OEM_Name, " + "Qty = @Qty, " + "Quote_Num = @Quote_Num, " + "Fab_Drawing_Num = @Fab_Drawing_Num, " + "Rfq_Num = @Rfq_Num, " + "Rev_Num = @Rev_Num " + "WHERE Quote_ID = @Quote_ID"); updateCmd.Connection = insertConnection; System.Data.SqlClient.SqlParameterCollection param = updateCmd.Parameters; // // Add new SqlParameters to the command. // param.AddWithValue("Open_Quote", newQuote.OpenQuote); param.AddWithValue("Customer_Name", newQuote.CustomerName); param.AddWithValue("OEM_Name", newQuote.OEMName); param.AddWithValue("Qty", newQuote.Qty); param.AddWithValue("Quote_Num", newQuote.QuoteNumber); param.AddWithValue("Fab_Drawing_Num", newQuote.FdNumber); param.AddWithValue("Rfq_Num", newQuote.RfqNumber); param.AddWithValue("Rev_Num", newQuote.RevNumber); adapter.UpdateCommand = updateCmd; adapter.Update(mds.Tables[0]); mds.AcceptChanges(); } } catch (Exception ex) { MessageBox.Show(ex.Message); } } Thanks in advance to anyone who can help, I really appreciate it, Andrew

    Read the article

  • PLS-00103: Encountered the symbol "end-of-file" in simple update block

    - by rageingnonsense
    Hello, The following Oracle statement: DECLARE ID NUMBER; BEGIN UPDATE myusername.terrainMap SET playerID = :playerID,tileLayout = :tileLayout WHERE ID = :ID END; Gives me the following error: ORA-06550: line 6, column 15: PL/SQL: ORA-00933: SQL command not properly ended ORA-06550: line 3, column 19: PL/SQL: SQL Statement ignored ORA-06550: line 6, column 18: PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: ( begin case declare end exception exit for goto if loop mod null pragma raise return select update while with <an identifier> <a double-quoted> I am pretty much at a loss. This appears to be a rather simple statement. If it helps any, I had a similar statement that performed an INSERT which used to work, but today has been giving me the same message.

    Read the article

  • How can I install ruby on rails with rvm?

    - by yeahthatguyrightthere
    I've been searching around for how to install it through the terminal on my Mac. I'm using Snow Leopard, and the currently installed version is 1.8.3. I've followed the other procedures that led me up to this point, but when I use the command rvm install 1.9.3 I get: Error running './configure --prefix="/Users/jose/.rvm/usr" ', please read /Users/jose/.rvm/log/ruby-1.9.3-p125/yaml/configure.log Then it mentions something about Xcode, and that autoreconf was not found in the PATH: Error running 'patch -F 25 -p1 -N -f <"/Users/jose/.rvm/patches/ruby/1.9.3/p125/xcode-debugopt-fix-e34840.diff"', please read /Users/jose/.rvm/log/ruby-1.9.3-p125/patch.apply.xcode-debugopt-fix-r34840.log rvm requires autoreconf to install the selected ruby interpreter, however autoreconf was not found in the PATH. I've been trying for a while now, and found out I need to have Xcode for Snow Leopard, which I cannot find. So my last option would be to upgrade to Lion, but I don't know about upgrading; I'm kind of scared that everything will become buggy afterwards.
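    One way to get autoreconf without hunting down an Xcode installer is to install the autotools separately; a sketch that assumes Homebrew (or another package manager) is already available on the machine:

      # autoreconf ships with autoconf; rvm only needs the tools on the PATH:
      brew install autoconf automake libtool

      # then retry the build:
      rvm install 1.9.3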

    Read the article

  • Creating Membership Tables, SPROCs, Views in Attached DB

    - by azamsharp
    I have the AdventureWorks database mdf file in my VS 2010 project. Is there any way I can create the Membership tables inside my AdventureWorks database? I know I can detach the database, attach it in SQL Server 2008, create the tables, and then detach it again. But I don't have SQL Server 2008, and I want to see if this can be done using the command-line tool. I tried this, but to no avail: aspnet_regsql.exe -d AdventureWorks.mdf -A mr -E -S .\SQLEXPRESS Update: If I right-click and view the properties of the AdventureWorks.mdf database, it shows the name as "C4BE6C8DA139A060D14925377A7E63D0_64A_10\ADVENTUREWORKSWEBFORMS\ADVENTUREWORKSWEBFORMS\ADVENTUREWORKS\APP_DATA\ADVENTUREWORKS.MDF" This is interesting!
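    One approach that is sometimes suggested for an attached .mdf is to give aspnet_regsql.exe a full connection string instead of a database name (a sketch only; the file path and the user-instance settings are assumptions, and it should be run from a Visual Studio command prompt):

      REM -C supplies the connection string; AttachDBFilename points at the .mdf
      aspnet_regsql.exe -A mr -C "Data Source=.\SQLEXPRESS;Integrated Security=True;User Instance=True;AttachDBFilename=C:\MyProject\App_Data\AdventureWorks.mdf"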

    Read the article

  • Delphi's OTA: is there a way to get active configuration while building (D2010)?

    - by Alexander
    I can ask Delphi to build all configurations at once, by clicking on "Build configurations" and invoking the "Make" command. This builds all configurations, one after another. The problem is that we have an IDE expert which must react to compilation events. We register IOTAIDENotifier80 to hook events. There are BeforeBuild and AfterBuild events, which we're interested in, and IOTAProject is passed to each event. The problem is: the active configuration never changes. I.e., if you have the "Debug" configuration selected (marked in bold), all calls to the BeforeBuild/AfterBuild events will return the Debug configuration profile (even though the IDE compiles the different profiles one after another). I mean the properties of IOTAProject here. I also tried to use IOTAProjectOptionsConfigurations, but its ActiveConfiguration property always returns the same "bolded" profile, regardless of the one currently being compiled. The question is: is there a way to get the "real" current profile?

    Read the article

  • How to exclude a properties file from a jar file?

    - by Nisarg Mehta
    Hi All, I have a Java application, for example see below:

      myProject
      |
      |----src
      |    |
      |    |--main
      |       |
      |       |--resources
      |          |
      |          |--userConfig.properties
      |          |--log4j.properties
      |
      |---target

    I am using Maven to build my project, with the following command to build the jar file: mvn package -DMaven.test.skip=true I want to exclude the userConfig.properties file from my jar file, so I have specified the following in pom.xml: <excludes> <exclude>**/userConfig.properties</exclude> </excludes> but this excludes it from the target folder in which the compiled code resides, and the application will not run because it will not find userConfig.properties. Can anyone help me? Thanks Nisarg Mehta

    Read the article

  • Problem connecting to the MySQL server (error #2002) in PHP

    - by Martin Sikora
    Hello, I installed ZWAMP 1.0.7 (on Windows 7), but I'm having a weird problem. I can't connect to my MySQL server from any PHP script. If I use the MySQL command line everything works fine, but phpMyAdmin returns error #2002. I'm not sure whether it's important or not, but the MySQL server is not able to create a socket file. I don't know what the problem is, but I think everything is configured properly in my.cnf. Do you have any ideas?
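    Error #2002 generally means the client tried a socket/named-pipe connection that is not there; one quick check from the command line is to force TCP (a sketch; if this works while "localhost" does not, pointing phpMyAdmin/PHP at 127.0.0.1 is a common fix):

      # --protocol=TCP rules the missing socket out of the equation:
      mysql --protocol=TCP -h 127.0.0.1 -u root -p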

    Read the article

  • Eclipse 3.5 and Ubuntu 9.10, subversion client does not work

    - by Cédric Girard
    Hi, I had Eclipse 3.5 (Yoxos) installed on my Ubuntu 8.04 for months, and it ran fine. I upgraded to 9.10 last week, and the Subversion plugin has not worked since the upgrade. When I try to update or commit, Subversion works for hours without any progress in the console or progress bars. I can delete files or add them to SVN, but commands which involve the network just hang. SVN runs fine using the command line. I have already patched the GDK problem; since then I can cancel an update/commit without crashing Eclipse. Regards, Cédric

    Read the article

  • MySQL LOAD DATA LOCAL INFILE example in python?

    - by David Perron
    I am looking for a syntax definition, example, sample code, wiki, etc. for executing a LOAD DATA LOCAL INFILE command from Python. I believe I can use mysqlimport as well if that is available, so any feedback (and code snippets) on which is the better route is welcome. A Google search is not turning up much in the way of current info. The goal in either case is the same: automate loading hundreds of files with a known naming convention and date structure into a single MySQL table. David
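    For the mysqlimport route, a sketch of the command-line side (the user, database, and file names are placeholders; mysqlimport loads each file into the table named after the file's base name, and LOAD DATA LOCAL may need --local-infile enabled depending on server settings):

      # A file named mytable.txt is loaded into the table `mytable`; --local reads
      # the file on the client side, just like LOAD DATA LOCAL INFILE:
      mysqlimport --local -u myuser -p mydb /path/to/mytable.txt

      # To push many differently-named files into one table, loop with LOAD DATA:
      for f in /path/to/incoming/*.txt; do
        mysql --local-infile=1 -u myuser -p mydb -e "LOAD DATA LOCAL INFILE '$f' INTO TABLE mytable"
      done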

    Read the article

  • Failed to Kill Process in SQL 2008

    - by Andrea.Ko
    I have a process with the following information:

      SPID: 11  Status: BACKGROUND  Login: sa  HostName: .  BlkBy: .  DBName: SAFEMIG  Command: CHECKPOINT

    When I execute KILL on this id, it returns "Only user processes can be killed." Normally, every session logged in to this server should have a HostName displaying the PC name, but this connection shows a dot, so I'm not sure who is executing what process over this connection. I executed "dbcc inputbuffer(11)" and it returned "EventType = No Event, Parameters = 0, EventInfo = Null". I appreciate any help/advice on this problem!

    Read the article

  • What's the best version control system for handling projects with graphics?

    - by acrosman
    I'm part of a small team (usually just two people); I handle the code, he handles the graphic design. In the past I've used CVS to handle version control of the code files, and while we've included the graphics in the repository, he hasn't derived nearly as much value from it as I have. Are there other packages that provide better features for supporting graphics? The system would need to have an easy-to-use GUI interface, as I don't think it's fair to expect a graphic designer to learn command-line tools. Additional aspect: the client software needs to run smoothly on OS X (for the designer) and Windows (for the programmer).

    Read the article

  • Running a shellscript from a C++ application and check if it succeeds

    - by Koning Baard
    I am creating an interpreter for my extension to HQ9+, which has the following extra command called V: V interprets the code as Lua, Brainfuck, INTERCAL, Ruby, ShellScript, Perl, Python, and PHP, in that order, and if even one error has occurred, runs the HQ9+-ABC code again. Most of them have libraries; BF and INTERCAL can be interpreted without a library, but the problem lies in ShellScript. How can I run a shell script from my C++ application (= the HQ9+-ABC interpreter) and, when it's done, get the error code (0 = succeeded, all others = failed)? So something like this: system(".tempshellscript738319939474"); if(errcode != 0) { (rerun code); } Can anyone help me? Thanks

    Read the article

  • bash: recursive listing of all files problem

    - by Michael Mao
    Run a recursive listing of all the files in /var/log and redirect standard output to a file called lsout.txt in your home directory. Complete this question WITHOUT leaving your home directory. My answer: ls -R /var/log/ /home/bqiu/lsout.txt I reckon the above bash command is not correct. This is what I've got so far:

      $ ls -1R
      .:
      cal.sh
      cokemachine.sh
      dir
      sort
      test.sh

      ./dir:
      afile.txt
      file
      subdir

      ./dir/subdir:

      $ ls -R | sed s/^.*://g
      cal.sh
      cokemachine.sh
      dir
      sort
      test.sh

      afile.txt
      file
      subdir

    But this still leaves all the directory/sub-directory names (dir and subdir), plus a couple of empty lines. How could I get the correct result without using Perl or awk? Preferably using only basic bash commands (this is just because Perl and awk are out of the assessment scope).
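    One way to get a flat list of files (no directory headers, no blank lines) with a single basic command, assuming the full path of each file is acceptable output (a sketch):

      # -type f keeps only regular files, so no directory names or blank lines appear:
      find /var/log -type f > ~/lsout.txt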

    Read the article
