Search Results

Search found 856 results on 35 pages for 'spreadsheet'.

Page 23/35 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Concatenate cells that change daily automatically?

    - by Harold
    I use CONCATENATE to pull data together from different cells in my spreadsheet. Since my data changes daily, I want the formula to also change daily without having to manually input the new cell in the CONCATENATE formula. I am looking for a way to do this but not sure how. Can anyone out there help me out please!? I appreciate the assistance in advance! Maybe this will help to explain what I need. I have a row of data from D4:AH4 that I insert daily based on the new day. I use the following formula: =CONCATENATE(TEXT('Raw Data'!B4,"m/d")," ",TEXT('Raw Data'!C4,"")," ", TEXT('Raw Data'!E4,"0.0%"))... E4 being the cell that changes daily, where the next day it would be F4, G4, etc. All other parts of the formula will stay the same. I hope this helps! Thanks! :)
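    A sketch of one way to do this, assuming the value to pull is always the right-most filled cell in 'Raw Data'!D4:AH4: let COUNTA count the filled cells and INDEX return the last one, so the formula re-targets itself each day without editing.

        =CONCATENATE(TEXT('Raw Data'!B4,"m/d")," ",TEXT('Raw Data'!C4,"")," ",
            TEXT(INDEX('Raw Data'!$D$4:$AH$4,COUNTA('Raw Data'!$D$4:$AH$4)),"0.0%"))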

    Read the article

  • Should my program "be lenient" in what it accepts and "discard faulty input silently"?

    - by romkyns
    I was under the impression that by now everyone agrees this maxim was a mistake. But I recently saw this answer which has a "be lenient" comment upvoted 137 times (as of today). In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were a few years ago; they have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept will lead to this. The second part of the maxim is "discard faulty input silently, without returning an error message unless this is required by the specification", and this feels borderline offensive. Any programmer who has banged their head on the wall when something fails silently will know what I mean. So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I misinterpreting what this is supposed to mean? Taken to the extreme, if Excel followed this maxim and I gave it an exe file to open, it would just show a blank spreadsheet without even mentioning that anything went wrong. Is this really a good principle to follow?

    Read the article

  • Goal Tracking data seems to be inaccurate?

    - by Khuram Malik
    I set up some goal tracking about one week ago. I had multiple goals in one set. The goal itself was the "send" button being pressed on the callback form (I did that by pushing a pageview to Google Analytics every time the send button is pressed). For each goal, I listed the first step as a required step. So for example, the ILR page was step 1 and set as required, and the goal was "/CallbackFormFilled". Looking at the stats a week later, I'm getting some very inflated numbers, especially when comparing them to my manually filled Excel spreadsheet, and I'm struggling to understand the cause of this behaviour. I'm unable to attach screenshots, unfortunately, since my StackExchange account for this site is brand new.

    My own thoughts: maybe it's because I have set up multiple goals with the same end goal URL, but I thought that was a valid setup since I want to track multiple routes, so to speak. I've disabled all other goals for now to confirm this, but I'm waiting for stats to come in as I write this. I also wonder if the contact form I'm using in WordPress is causing a problem, but I've simply added one JavaScript line on the send button that pushes a pageview, so I'm not sure if that should cause an issue. Here is a link to setting up analytics on this contact form plugin in WordPress for reference (see the JavaScript action hook section): http://ideasilo.wordpress.com/2009/05/31/contact-form-7-1-10/
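    For reference, that one-line pageview push in classic Google Analytics would look something like this (the virtual path has to match the goal URL configured above):

        // fired from the send button's click handler
        _gaq.push(['_trackPageview', '/CallbackFormFilled']);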

    Read the article

  • Vlookup to retrieve an ID from table using text match

    - by Federico Giust
    I've got an Excel spreadsheet where I would normally use a VLOOKUP. In this case I need to find the ID of the record when comparing email addresses, so the email address is the unique ID here. For example, on sheet 1:

        A           B             C            D
        Person Id | Family Name | First Name | Email
        #N/A      | Doe         | John       | [email protected]

    On sheet 2:

        A           B             C            D
        Person Id | Family Name | First Name | Email
        12345     | Doe         | John       | [email protected]

    Basically, sheet 1 has 800 records and sheet 2 has 450. I know the 450 are in sheet 1, so I need to find their IDs and put them on sheet 1, where I've got lots more data for each person. What I've tried so far is a VLOOKUP, but I keep getting an error. I'd like to do it with some sort of formula, not with any copy-paste and remove-duplicates. Any ideas?
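    The error is expected from a plain VLOOKUP here: it can only search the left-most column of its table, and the email sits to the right of the ID. An INDEX/MATCH sketch that looks leftward, assuming the second sheet is literally named Sheet2 with emails in column D and IDs in column A:

        =INDEX(Sheet2!$A:$A, MATCH(D2, Sheet2!$D:$D, 0))

    Entered in A2 of sheet 1 and filled down; on Excel 2007+ it can be wrapped in IFERROR(..., "") to leave unmatched rows blank.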

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger. So much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:
    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space, and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor could manage
    - The actual data needs to be kept secured, which is another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data, but we did not have the full customer data yet, so our database was empty. This was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). We took our sample data SimpleMWRRData and turned it into a new Oracle database table called IRR_DATA via ore.create(), after which we could access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA, and see it from sql*plus like any other table.

    With our sample data loaded in the database as a normal Oracle table called IRR_DATA, we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    With ore.groupApply(), we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
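    A sketch of what these steps might look like (connection details and the account name are placeholders, and the exact arguments are illustrative rather than verbatim):

        library(ORE)
        # Placeholder connection details
        ore.connect(user = "rquser", sid = "orcl", host = "dbserver",
                    password = "...", all = TRUE)

        # Turn the local sample data.frame into a database table named IRR_DATA
        ore.create(SimpleMWRRData, table = "IRR_DATA")

        # Crawling: pull a single account into local R memory and run the function
        acct <- ore.pull(IRR_DATA[IRR_DATA$ACCOUNT == "ACCT0001", ])
        SimpleMWRR(acct)

        # Walking: let the database split IRR_DATA by ACCOUNT and apply the function
        results <- ore.groupApply(IRR_DATA, INDEX = IRR_DATA$ACCOUNT,
                                  function(dat) myIRR::SimpleMWRR(dat))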
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT; it then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and finally combines all the individual results from the calls to the R function.

    This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data for a single account in R memory (not all of the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, the actual SimpleMWRR calculation is run by the embedded R engine on the database server, so the IRR_DATA never leaves the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed: the Oracle database needs to know the structure of the input table and the grouping column, which we define in a short setup script using the database's pipelined table function mechanisms. Once that initial setup of rqGroupEval is done for the IRR_DATA table, the next step is to register our R function with the database via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect from the R function. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate().
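    Following that argument order, the test query might be sketched like this (the pipelined function name irr_dataGroupEval comes from the text; the dummy select and parameter cursor are illustrative):

        select *
          from table(irr_dataGroupEval(
                 cursor(select * from IRR_DATA),               -- input rows
                 cursor(select 1 as "ore.connect" from dual),  -- extra parameters, if any
                 'select ACCOUNT, 1 IRR from IRR_DATA',        -- dummy select: output shape
                 'ACCOUNT',                                    -- split/group-by column
                 'SimpleMWRR'));                               -- script name from rqScriptCreate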
    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features: in the SQL used in the customer proof, we put a parallel hint on the cursor that is the input to our rqGroupEval function, and that is all we need to do to enable Oracle to use parallel R engines.

    Watching this SQL in the Real-Time SQL Monitor while we ran it during the proof of concept showed a few things:
    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event

    Doing the arithmetic on those numbers:
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed about 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions per second did rate of return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how the Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks whether he should have a separate database per customer. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and, very important, what are the security requirements? From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead: it needs a certain amount of memory for locking and so on. But it has a very clean boundary, since everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier: I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix: perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Openoffice.org: Mouse wheel one row at a time possible?

    - by Maksee
    I noticed this in Excel, and now in OpenOffice.org Calc: one notch of the mouse wheel scrolls 3 rows (lines). Is it possible to change this in Calc and/or Excel?

    EDIT: Yes, I know about the system-wide setting for the number of lines per notch of the scroll wheel. But some applications interpret this setting relative to a size in pixels, so scrolling is predictable; in a spreadsheet it is not. Since the height of a row depends on its cell content, even with the setting at 1 you are guaranteed unpredictable movement of the content before your eyes with a single notch.

    Read the article

  • pull sql query execution location from either the sql server or IIS

    - by jon3laze
    I am working on restructuring the database for a project that has hundreds of classic ASP pages. I need to be able to find out which pages are executing which queries so that I can analyze the data. I am hoping there is some way to accomplish this without having to manually open each ASP page and copy/paste the queries into a spreadsheet. I would imagine this is something I could pull from logs? Any info is appreciated.

        IIS 7
        MSSQL 2008 R2
        Windows Web Server 2008 build 6001
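    On the SQL Server side, the plan cache already records what has been executing. A starting sketch (this lists the statements and how often they run, though not which ASP page issued them; correlating timestamps with the IIS logs, or embedding a page-name comment in each query, would close that gap):

        SELECT TOP (50)
               st.text AS query_text,
               qs.execution_count,
               qs.last_execution_time
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.execution_count DESC;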

    Read the article

  • Excel Graph: How can I turn data below in to a 'time based' graph

    - by Mike
    In my spreadsheet I am collecting time periods when certain values have been changed. The user is restricted to 4 time periods. I would like to show the data based on those time periods. I've included a mock-up of the data and the type of graph I would like to create. I've tried to create it for the last hour but am obviously missing something, so I thought I'd ask around. http://i48.tinypic.com/55lezr.jpg Many thanks for any help. Mike P.S. How do I make this image appear in the message and not as a link?

    Read the article

  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue. It is how things work, and there is a simple trick to resolve it. It is very common that when users copy a result set to Excel, the decimal values get lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Since trailing zeros after the decimal point carry no numeric value, Excel automatically hides them. To prevent this, the user has to convert the columns to Text format so the pasted values keep their formatting. Here is how you can do it. Select the corner between A and 1 and right-click on it; this selects the complete spreadsheet. (If you want to change the format of a single column, you can select that column the same way.) In the menu, click on Format Cells... to bring up the formatting dialog. By default the selected cells will be General; change that to Text, and the format of all the cells changes to Text. Now paste the values from SSMS into Excel once again. This time it will preserve the decimal values from SSMS. Solved! Do you know any other trick to preserve the decimal values? Leave a comment please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Excel

    Read the article

  • multiple count Pivot table in Excel

    - by Sivakanesh
    Hi all, I'm trying to put together a pivot table from an Excel spreadsheet. The spreadsheet looks similar to the following:

        DeptHead   Emp   Increment
        x          A     2.5%
        x          B
        y          C     1.5%
        y          D
        y          E     2.0%

    I would like to make a pivot table that looks like the following:

        DeptHead   CountOfEmp   CountOfIncrement
        x          2            1
        y          3            2

    So it provides a count of the total number of Emps and the total number of Increments for each DeptHead, ignoring the blanks. I have tried to do this in many ways in the pivot table, but the two counts only appear in rows and not in columns as above. Is there any way to achieve this please? Thanks
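    In the pivot table itself, putting DeptHead in the row area and dragging both Emp and Increment into the values area as Count fields should give exactly this layout, since Count skips blank cells. Failing that, plain formulas produce the same summary; a sketch assuming the data sits in columns A:C:

        =COUNTIF($A:$A,"x")                counts Emps under DeptHead x
        =COUNTIFS($A:$A,"x",$C:$C,"<>")    counts non-blank Increments under x

    (COUNTIFS needs Excel 2007 or later.)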

    Read the article

  • What is needed to invoke LibreOffice running just the macro without the GUI?

    - by C.W.Holeman II
    Invoking LibreOffice and running a macro via the GUI works as expected, producing three HTML files, one for each spreadsheet page:

        $ libreoffice x.ods
        Tools > Macros > Run Macros...
            Library: LibreOffice Macros > ExportSheetsToHTML
            Macro Names: exportsheetstohtml.js
            Run

    When attempting to invoke just the macro, it just hangs:

        $ libreoffice \
            -invisible \
            -nofirststartwizard \
            -headless \
            -norestore \
            x.ods \
            "macro:///LibreOffice Macros.ExportSheetsToHTML.exportsheetstohtml.js"

        $ ps x | grep libreoffice
        11286 pts/0 S+ 0:00 /bin/sh /opt/libreoffice/program/soffice -invisible -nofirststartwizard -headless -norestore x.ods macro:///LibreOffice Macros.ExportSheetsToHTML.exportsheetstohtml.js
        11296 pts/0 Sl+ 0:58 /opt/libreoffice/program/soffice.bin -invisible -nofirststartwizard -headless -norestore x.ods macro:///LibreOffice Macros.ExportSheetsToHTML.exportsheetstohtml.js

    Version info:

        Linux road 2.6.32-28-generic #55-Ubuntu SMP Mon Jan 10 21:21:01 UTC 2011 i686 GNU/Linux
        LibreOffice 3.3.0 OOO330m19 (Build:6) tag libreoffice-3.3.0.4

    Read the article

  • Data Movement and the Decision Matrix

    - by BuckWoody
    Maybe it's my military background, or maybe I've always had this predilection, but I like to use two devices when I need to make a complex decision: a checklist and a decision matrix. I like to use a checklist because it ensures that I remember the big bits of what I need to do, and brings up questions or areas that I didn't think about when evaluating options for the decision. And the decision matrix is the thing I use to actually lay out those options. It's simply a spreadsheet-like grid (I use Excel, but paper and pencil works as well) that lays out the requirements or advantages for the decision across the top, and the options I have on the left-hand side. Then in the "cells" I put whether or not that option on the left will meet the requirement in that column. I then simply "weight" each cell to organize the choices by best-fit. The right answer (or answers) will float right to the top. I was asked yesterday about options for moving data in SQL Server to another system. There are just dozens of ways to do this, from bcp to Replication, each with certain advantages and costs. But asking the questions for the top row first helped me show the person that it isn't a particular technology that is important; it's laying out those requirements and thinking about which elements are more important than the others. For instance, is it more important to have the data moved all the time, or is it OK if that happens once in a while? Does the data have to move in two directions or just one? All of these will help that answer jump right out. Try it sometime; it's a great learning exercise, since it will force you to focus on filling out the matrix. The answer is out there, Neo.
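    As a sketch, a matrix for this data-movement question might start out like this (the options, requirements and weights are purely illustrative):

        Option        Continuous? (x3)   Two-way? (x2)   Easy setup (x1)   Weighted total
        bcp                  0                 0                2                 2
        Replication          2                 2                0                10
        SSIS                 1                 0                1                 4

    Each cell score is multiplied by its column weight and summed across the row, so the best-fit option floats to the top.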

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown their hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away, and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines?

    The current solution looks like this:
    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week, data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product data with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold

    The new solution should:
    1) Import data from ODBC, CSV and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple algorithms (< $80 add $10, $200 add $25)
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically

    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk, but it may take too long to develop and deploy. Any suggestions?

    Read the article

  • Excel macro: Replace entire cell contents; replace 1 but not 10, 11, 21 etc

    - by user65678
    I need to replace a large number of numbers with words in an Excel spreadsheet. E.g.:

        1 = hello
        12 = goodbye
        4 = cat

    I can do it with the standard search and replace, but I have a large list to work through (about 240 number/word combos), so I figured I would use a macro. I have this:

        Sub findreplacer()
            For Each mycell In Range("A1:A1000")
                mycell.Replace What:="1", Replacement:="hello"
                mycell.Replace What:="12", Replacement:="goodbye"
                mycell.Replace What:="4", Replacement:="cat"
            Next
        End Sub

    But it replaces the 1 in 12, so the cell reads hello2 instead of goodbye. How can I make it affect only cells that contain exactly the specified number, the way "match entire cell contents" works? Any help appreciated.
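    Range.Replace takes a LookAt argument, and passing xlWhole is the macro equivalent of ticking "match entire cell contents". A sketch of the fix:

        Sub findreplacer()
            For Each mycell In Range("A1:A1000")
                ' xlWhole: replace only when the entire cell equals What
                mycell.Replace What:="1", Replacement:="hello", LookAt:=xlWhole
                mycell.Replace What:="12", Replacement:="goodbye", LookAt:=xlWhole
                mycell.Replace What:="4", Replacement:="cat", LookAt:=xlWhole
            Next
        End Sub

    With whole-cell matching the order of the replacements no longer matters, so the 240 pairs could also be kept on a worksheet and looped over instead of hard-coded.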

    Read the article

  • How can I make the results of a formula into values that can be filtered or used with VLOOKUP in Excel

    - by Burt
    I am using various formulas to move and split data from various sources. The problem is that when my final results post to their final destination, I still need to either run advanced filters or a VLOOKUP on the results. I can't do this because, as an example, cell A1 shows a value of A127 but the actual cell content is =RIGHT(A2,FIND(" ",A2&" ")-2). Everything I read said to copy and paste special values, but this doesn't work for me, as the idea is to have the formulas/macros run everything and eliminate cutting and pasting. In the case above I have a formula that pulls that info from a spreadsheet that is saved every week. Once it is pulled, part of it is cut out into another column. I then need to run a VLOOKUP on those results against data already contained on another tab.

    Read the article

  • What's the proper way to merge two projects in source control software

    - by Mallow
    I'm using Fossil-SCM to maintain my projects. Since I don't work in a team, I usually have just a very linear branch of development: 1.0 - 1.1 - 1.2. I'm wondering what the procedure is when one project's task is about to be given to a related project, thereby rendering the first project obsolete. Although I tend to rewrite most of my code if I don't remember having already written it, I would still like to keep the code archived, and I'd rather not have a fossil repo that is just dead. Can I merge it? Is that the proper way of handling this? For example, the code was extracting data from an Excel file in order to format an HTML page. Now I've convinced my employer to move their Excel spreadsheet into a database to decrease redundancy, increase efficiency and yadda yadda. Since I can now make logical queries that don't have to jump through hoops to perform using the database, I won't need the extra .vbs files that originally manipulated the Excel file. Technically I would be porting part of the existing code into the current new project. Since it already has its own trunk, would it be advisable to combine the trunk of a different project with this one, and how would I do that exactly? So I guess my tree would look like this, and I haven't seen examples of software branching that resemble this inverted tree before, so I'm wondering what the norm for a situation like this is?

    Read the article

  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs, and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read an Open Document spreadsheet in C#.

    Importer as example

    My previous post about importers showed you how to build flexible importer support into your web application. This post introduces a practical example of one of my importers. Of course, sensitive code is omitted. We start with the ODS importer class and we add new methods as we go.

        public class OdsImporter : ImporterBase
        {
            public OdsImporter()
            {
            }

            public override string[] SupportedFileExtensions
            {
                get { return new[] { "ods" }; }
            }

            public override ImportResult Import(Stream fileStream, long companyId, short year)
            {
                string contentXml = GetContentXml(fileStream);

                var result = new ImportResult();
                var doc = XDocument.Parse(contentXml);

                // Skip the first row: on my sheets it always holds the column headers
                var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);

                foreach (var row in rows)
                {
                    ImportRow(row, companyId, year, result);
                }

                return result;
            }
        }

    The class given here just extends the base class for importers (the previous post uses an interface, but as I already said there, you move to an abstract base class when writing code for real projects). The Import method reads data from the *.ods file, parses it (it is XML), finds all data rows and imports the data. As you may see, the first row is skipped, because the first row on my sheet is always the headers row.

    Reading the ODS file

    Our Import method starts by getting the XML from the *.ods file. ODS files, like OpenXML files, are zipped containers that hold different files. We need content.xml, as all the data is kept there. To get the contents of the file we use the SharpZipLib library to read the uploaded file as a *.zip file.

        private static string GetContentXml(Stream fileStream)
        {
            var contentXml = "";

            using (var zipInputStream = new ZipInputStream(fileStream))
            {
                ZipEntry contentEntry = null;
                while ((contentEntry = zipInputStream.GetNextEntry()) != null)
                {
                    if (!contentEntry.IsFile)
                        continue;
                    if (contentEntry.Name.ToLower() == "content.xml")
                        break;
                }

                // GetNextEntry() returns null when the archive holds no content.xml
                if (contentEntry == null || contentEntry.Name.ToLower() != "content.xml")
                {
                    throw new Exception("Cannot find content.xml");
                }

                var bytesResult = new byte[] { };
                var bytes = new byte[2000];
                var i = 0;

                while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)
                {
                    var arrayLength = bytesResult.Length;
                    Array.Resize<byte>(ref bytesResult, arrayLength + i);
                    Array.Copy(bytes, 0, bytesResult, arrayLength, i);
                }
                contentXml = Encoding.UTF8.GetString(bytesResult);
            }
            return contentXml;
        }

    If there is a content.xml file then we stop browsing the archive, read this file into memory, and return it as a UTF-8 string.

    Importing rows

    Our last task is to import the rows. We use a special method for this, as we have to handle some tricks here. To keep files smaller, the cell count on a row is not always the same: if we have more than one empty cell one after another, ODS keeps only one cell for the sequential empty cells. This cell has an attribute called number-columns-repeated, and its value is set to the number of sequential empty cells. This is why we use two indexes for the cells collection.

        private void ImportRow(XElement row, long companyId, short year, ImportResult result)
        {
            var cells = (from c in row.Descendants()
                         where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"
                         select c).ToList();

            var dto = new DataDto();

            var count = cells.Count;
            var j = -1;   // j tracks the logical column, i the physical cell

            for (var i = 0; i < count; i++)
            {
                j++;
                var cell = cells[i];
                var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");
                if (attr != null)
                {
                    var numToSkip = 0;
                    if (int.TryParse(attr.Value, out numToSkip))
                    {
                        j += numToSkip - 1;
                    }
                }

                if (i > 30) break;
                if (j == 0)
                {
                    dto.SomeProperty = cells[i].Value;
                }
                if (j == 1)
                {
                    dto.SomeOtherProperty = cells[i].Value;
                }
                // some more data reading
            }

            // save data
        }

    You can define your own class for import results and add to it all the problems found during data import. Your application gets the results and shows them to the user.

    Conclusion

    Reading ODS files may seem a complex task, but actually it is very easy if we only need the data from those documents. We can use a zip library to get the content file and then parse it as XML. It is not hard to go through the XML, but there are some optimization tricks we have to know. The code here is safe to use in web applications, as it is not using any APIs that have special needs on the server or infrastructure.

    Read the article

  • How to remove Excel List Manager?

    - by jdmuys
    I have a spreadsheet with columns set up using the List Manager. I want to remove the List Manager. However, the "Remove List Manager" item in the "List" menu of the List toolbar is always disabled. I tried selecting a number of different sets of cells in the list, to no avail. "Remove List Manager" stays stubbornly disabled (grayed out). What am I missing? I am using Excel 2008 version 12.1.5.

    Read the article

  • Find a Certain Cell based on other Cells in Excel/Calc

    - by user77325
    I have a spreadsheet:

        Beans       B-kg       Chips      C-kg
        1.4oz/12    0.47544    6.5oz/20   3.679
        1.48oz/12   0.502608   7oz/12     2.3772
        1.86oz/12   0.631656   8oz/20     4.528

    and a second sheet:

        Category   Name        Case Kg
        Beans      1.4oz/12    ?
        Beans      1.48oz/12   ?
        Chips      6.5oz/20    ?

    I am trying to match the type of product with the correct weight. So I need a formula that will choose the correct column based on the Category, then choose the correct row based on the Name, and output the result next to it.
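    A sketch of one way to do this, assuming the first sheet is named Sheet1 with the Beans pairs in columns A:B and the Chips pairs in columns C:D, and with Category in A2 and Name in B2 of the second sheet:

        =IF(A2="Beans",
            VLOOKUP(B2, Sheet1!$A:$B, 2, FALSE),
            VLOOKUP(B2, Sheet1!$C:$D, 2, FALSE))

    With more than two categories, nested IFs get clumsy; MATCH against a row of category headers feeding INDEX is the usual generalization.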

    Read the article

  • Two related cells: give a value in one, calculate the other, and vice versa?

    - by Virtlink
    How can I have a cell that uses the literal value written into it, or calculates its value when no literal value was given? For example: I have two columns: column B with a price including VAT, and column C with a price without VAT. If I put a price with VAT in B2, then I want cell C2 to calculate the price without VAT based on B2. But if I put a price without VAT in C2, then I want cell B2 to calculate the price with VAT from C2. I want to give this spreadsheet to my mother, who barely understands Excel. She just has to enter the values that she knows, and the worksheet should derive the other values from that.
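    A single cell cannot both hold a typed literal and a formula, so this cannot be done with formulas alone: typing into B2 would overwrite any formula stored there. The usual approach is a small worksheet event macro. A minimal sketch, assuming the VAT-inclusive price lives in column B, the exclusive price in column C, and a 21% rate (all three are assumptions to adjust):

        Private Sub Worksheet_Change(ByVal Target As Range)
            If Target.Cells.Count > 1 Then Exit Sub
            If Intersect(Target, Me.Range("B:C")) Is Nothing Then Exit Sub
            Application.EnableEvents = False   ' avoid re-triggering this handler
            If Target.Column = 2 Then          ' price with VAT entered in B
                Me.Cells(Target.Row, 3).Value = Target.Value / 1.21
            ElseIf Target.Column = 3 Then      ' price without VAT entered in C
                Me.Cells(Target.Row, 2).Value = Target.Value * 1.21
            End If
            Application.EnableEvents = True
        End Sub

    The macro goes in the worksheet's own code module, so the sheet behaves the same no matter which of the two cells she fills in.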

    Read the article

  • Advanced (?) Excel sorting

    - by Preston Grayskull
    First of all, I'd like to admit that I don't really know anything about Excel, but I have tried to look up a solution to this in Excel books and by Googling. Here's what I'm trying to do: I have a really long spreadsheet. There are 7 columns total, but only two columns that I'm most interested in. Here's an example CSV that is much simpler than my actual dataset, but the search/sort is analogous:

        John, Apple
        Dave, Apple
        Dave, Orange
        Steve, Apple
        Steve, Orange
        Steve, Kiwi
        Bob, Apple
        Bob, Banana

    I'm interested in extracting the entire rows (all of the columns) that meet the following criteria:

        ["Apple"] OR ["Apple" and "Orange"]
        NOT ["Apple" and "Orange" and anything else]
        NOT ["Apple" and anything that isn't Orange]

    So with the above CSV, I would get the entire rows for John and Dave, but not Steve and not Bob. I started doing this manually, and will likely finish by the time this question has an answer, but I would like to know this for future reference. Thanks!
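    One way to express those rules without manual work is a helper column: keep a person when they have at least one Apple row and every one of their rows is either Apple or Orange. A sketch, assuming names in column A and fruits in column B, entered in C2 and filled down (COUNTIFS needs Excel 2007+); the line breaks are only for readability, it is a single formula:

        =IF(AND(COUNTIFS($A:$A,A2,$B:$B,"Apple")>0,
                COUNTIF($A:$A,A2)=COUNTIFS($A:$A,A2,$B:$B,"Apple")
                                 +COUNTIFS($A:$A,A2,$B:$B,"Orange")),
            "Keep","Drop")

    Filtering the helper column on Keep then shows John's and Dave's rows and hides Steve's (he also has Kiwi) and Bob's (Banana).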

    Read the article

  • ignore or override current document page formatting with predefined settings automatically libreoffice

    - by alex
    When opening a document made by someone else, I would like the margins to automatically be set to 0.4 cm, the page orientation to landscape and page size to A3. My dad gets emailed a spreadsheet weekly and he prints them off. To fit them onto one page he applies these settings, which is quite laborious. I thought that there must be a quicker way of doing this! I tried creating a new default template with these settings but this only works for a new blank document. I tried to create a style to quickly apply these settings but I realised these styles are document / template specific (?) and so don't appear when opening someone else's document. Anyone have any ideas how I can do this? Thanks =]

    Read the article

  • Scripting language for filling out web form

    - by ityler22
    I have a job as an intern at a technology company, and I was given the unfortunate task of performing some data entry into our web management system. The information entered into the web form is stored in a MySQL DB. Upon receiving the data, I realized I would have to submit this online form about 1000 different times, each submission consisting of about 10 different text fields / check boxes. (In other words, it would be completely mind-numbing and a ridiculous waste of time and resources, or so I thought...) Having used databases a good bit prior to this, my immediate reaction was to just write a short MySQL script to bulk import all of the data, especially since it was already presented to me in an Excel spreadsheet ready to go. I thought it may have been some sort of a test, since it seemed too obvious. I wrote the script, which consisted of about 10 lines of code, but was then informed I couldn't be trusted with MySQL admin privileges to run it. So my next thought was to write a script to just enter the information through the web form (which will take ten times longer, but it's what I have to do). Being unfamiliar with scripting of this nature (it seems like I would need something similar to a bot, but the good kind), I am unsure of how to proceed. Is there a preferred language for entering the data I have into the web form I do have access to? I'm not particularly looking for this to be done for me by any means, just a nice point in the right direction as far as what scripting language to use and how to pair that with the data that needs to be entered. Thanks for the help / valuable input!

    EDIT: Is there a way to perform this using Perl without having access to place any files on the server? Would I be able to run some JavaScript loops to pull the data out of .csv (or just .txt with line delimiters) format and insert it into the web form?
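    For the JavaScript route, a minimal sketch run from the browser's console on the admin site could look like the following; the URL and field names here are hypothetical and would need to match the real form (and a thousand rapid POSTs may want throttling):

        // Hypothetical endpoint and field names; one array entry per spreadsheet row
        var records = [
          ["Acme Corp", "NY", "1"],
          ["Globex", "CA", "0"]
          // ...
        ];
        for (var i = 0; i < records.length; i++) {
          var r = records[i];
          var body = "company=" + encodeURIComponent(r[0]) +
                     "&state=" + encodeURIComponent(r[1]) +
                     "&active=" + encodeURIComponent(r[2]);
          var xhr = new XMLHttpRequest();
          xhr.open("POST", "/admin/save.php", false); // synchronous keeps ordering simple
          xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
          xhr.send(body);
        }

    Perl's WWW::Mechanize does the same job from outside the browser and needs nothing placed on the server, since it just submits HTTP requests the way a browser would.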

    Read the article

  • Inserting static current time in Excel

    - by Mike Cole
    I have a time log spreadsheet. I have a new sheet for each day. In each sheet, I have a transactional record of how my time was spent. When I start or end a task, I usually type in the time ("11:00 AM" for example). Is there a shortcut to inserting the current time into a field? I'm sure it can be done with a macro, but I'm not very knowledgeable about macros. I'd like to simply highlight a field and hit some sort of shortcut key to insert a static value of the current time. Thanks for any help!
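    For what it's worth, Excel ships a shortcut for this: Ctrl+Shift+; (semicolon) inserts the current time as a static value, and Ctrl+; inserts the current date. The macro version is a one-liner; a sketch that could be bound to a key via the Macro Options dialog:

        Sub InsertStaticTime()
            ' Write the current time into the active cell as a fixed value
            ActiveCell.Value = Time
            ActiveCell.NumberFormat = "h:mm AM/PM"
        End Sub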

    Read the article
