Search Results

Search found 1639 results on 66 pages for 'csv'.

Page 53/66

  • Grid View To Excel

    - by rahulchandran
    Hi. I am trying to convert the contents of a GridView to an Excel file, and I am doing it using this code: string attachment = "attachment; filename= " + FileName; Response.ClearContent(); Response.AddHeader("content-disposition", attachment); Response.ContentType = "application/excel"; StringWriter sw = new StringWriter(); HtmlTextWriter htw = new HtmlTextWriter(sw); gv.RenderControl(htw); Response.Write(sw.ToString()); Response.End(); The problem is that I am getting some sort of HTML in an Excel-style format: there's JavaScript in the page, links, etc. What I want is to turn the results of my query into a comma-separated file. Is that doable for free, or do I have to run the query myself, get the data and write out a CSV stream? Thanks
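
    A possible approach (not from the original post): if the query results are available as a DataTable, the CSV text can simply be written by hand and streamed as an attachment. Below is a minimal C# sketch; the helper name is made up, dt is assumed to be filled elsewhere from the same query the GridView uses, and embedded commas/quotes are not escaped.

        using System;
        using System.Data;
        using System.Linq;
        using System.Text;
        using System.Web;

        static class CsvDownload
        {
            // Streams a DataTable to the browser as a CSV attachment.
            public static void WriteCsv(DataTable dt, HttpResponse response, string fileName)
            {
                response.ClearContent();
                response.AddHeader("content-disposition", "attachment; filename=" + fileName);
                response.ContentType = "text/csv";

                var sb = new StringBuilder();

                // Header row built from the column names.
                sb.AppendLine(string.Join(",", dt.Columns.Cast<DataColumn>()
                                                 .Select(c => c.ColumnName).ToArray()));

                // One line per data row; Convert.ToString turns DBNull into an empty string.
                foreach (DataRow row in dt.Rows)
                    sb.AppendLine(string.Join(",", row.ItemArray
                                                      .Select(v => Convert.ToString(v)).ToArray()));

                response.Write(sb.ToString());
                response.End();
            }
        }

    It would be called from the page as something like CsvDownload.WriteCsv(dt, Response, "report.csv") after filling dt from the query, instead of rendering the GridView into the response.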

    Read the article

  • Http Geocoder (Google) Accuracy level

    - by sushruth
    I am geocoding a large number of user-entered addresses and am interested in the accuracy levels returned. My GOAL is to get the BEST POSSIBLE ACCURACY score for a given address. I call the geocoder API the following way: http://maps.google.com/maps/geo?q={address}&output=csv&sensor=false&key=xx Now, the accuracy levels returned for the same address with/without the premise name: q = Key Arena, 305 Harrison Street, Seattle, WA 98109 (Accuracy is 5) q = 305 Harrison Street Seattle, WA 98109 (Accuracy is 8) q = Key Arena, Seattle, WA 98109 (Accuracy is 9.) It's obvious from the above that the Google servers do not return the best accuracy when the street name is appended with the premise/venue. The question is :) is there a way to pass the complete address (with premise name, i.e. case 1) and get the max accuracy? (Or how can I tell the Google server that the address is passed with a premise/building name and a street name?) (If you are thinking why not just use case 3, the answer is that these are user-entered addresses; they could enter "my mom's house" for the premise, with an accurate street address, in which case I want the accuracy to be 8, not 5.)

    Read the article

  • Export view data programmatically in Access/SQL Server

    - by andy
    We have an Access application front end connected to a SQL Server 2000 database. We would like to be able to programmatically export the results of some views to whatever format we can (ideally Excel, but CSV / tab delimited is fine). Up until now we've just hit F11, opened up the view, and hit File > Save As, but we're starting to get results with more than 16,000 rows, which can't be exported that way. I'd like some sort of server-side stored procedure we can trigger that will do this. I'm aware of the sp_makewebtask procedure that does this, however it requires administrative rights on the server, and for obvious reasons we can't give that to everyone. Any ideas?
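
    One client-side alternative (not from the original post, and it doesn't cover the stored-procedure angle): a small console program that streams the view through a SqlDataReader into a CSV file row by row, so it is not bound by the export limit mentioned above. The connection string, view name and output path are hypothetical, and embedded commas are not quoted.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class ViewToCsv
        {
            static void Main()
            {
                const string connStr = "Server=myServer;Database=myDb;Integrated Security=SSPI;";
                const string viewName = "dbo.SomeView";        // hypothetical view name

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("SELECT * FROM " + viewName, conn))
                using (var writer = new StreamWriter("export.csv"))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        // Header row from the result's column names.
                        var header = new string[reader.FieldCount];
                        for (int i = 0; i < reader.FieldCount; i++)
                            header[i] = reader.GetName(i);
                        writer.WriteLine(string.Join(",", header));

                        // One CSV line per row, written as it is read.
                        while (reader.Read())
                        {
                            var fields = new string[reader.FieldCount];
                            for (int i = 0; i < reader.FieldCount; i++)
                                fields[i] = reader.IsDBNull(i) ? "" : reader.GetValue(i).ToString();
                            writer.WriteLine(string.Join(",", fields));
                        }
                    }
                }
            }
        }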

    Read the article

  • Creating an Excel Template for different data size

    - by dassouki
    I created an Excel template for a file I've done for a routine work calculation. The file takes data from the data logger, does some analysis on it, and outputs one number regardless of the input size. The problem I'm having is that I have to modify the sheet to suit the number of rows, as every day the data logger outputs a different number of rows. There are about 15 sheets in the workbook and it's annoying to have to change every one of them every day. What I'd like to do is input the data logger CSV and have the result output automatically. Is there a way, through VBA or otherwise, to achieve this?

    Read the article

  • Trouble getting email attachment from Exchange

    - by JimR
    I am getting the error message “The remote server returned an error: (501) Not Implemented.” when I try to use HttpWebRequest.GetResponse() with the GET method to get an email attachment from Exchange. I have tried changing the HttpVersion, and I don't think it is a permissions issue since I can search the inbox. I know my credentials are correct as they are used to get HREF using HttpWebRequest.Method = Search on the inbox (https://mail.mailserver.com/exchange/testemailaccount/Inbox/). HREF = https://mail.mailserver.com/exchange/testemailaccount/Inbox/testemail.EML/attachment.csv Sample code: HttpWebRequest req = (System.Net.HttpWebRequest) HttpWebRequest.Create(HREF); req.Method = "GET"; req.Credentials = this.mCredentialCache; string data = string.Empty; using (WebResponse resp = req.GetResponse()) { Encoding enc = Encoding.Default; if (resp == null) { throw new Exception("Response contains no information."); } using (StreamReader sr = new StreamReader(resp.GetResponseStream(), Encoding.ASCII)) { data = sr.ReadToEnd(); } }

    Read the article

  • using subset but old variables still left

    - by user2520852
    I am working with a data set, which is basically daily usage data (let's just say variables X and Y) by different cities (about 150 cities). I have created a subset of the data for only specific cities, choosing just 3 of the 150 cities. Then when I do tapply by cities, I get means for the 3 cities but also get NA for all the other 147 cities that were in the data set. I am using the code below: df<-read.csv(...) df_sub<-subset(df,df$City==1|df$City==3|df$City==19) X_Breakdown<-tapply(X,df_sub$City, mean, na.rm=TRUE) Print(X_Breakdown) City 1 City 2 15 NA City 3 City 4 12 NA City 5 City 6 NA NA Hope you get the idea. I would like to get a dataset that only contains the 3 cities that I'm interested in. It seems that the full set of city levels is still encoded in R; is there a way to fix this? Kindly advise. Thanks

    Read the article

  • Export Multiple Sheets to Excel Through Browser

    - by ProfK
    I need to export multiple data tables to Excel on the client's machine, each to its own sheet. If it were just one sheet, I'd use the Excel/CSV content type, but I've heard something about an XML format that can represent an entire workbook. I don't want to go down the Packaging and .xlsx route, so I need standard .xls. Our bug tracker, Gemini, used to have an export function that produced an XML file that Excel automatically opened as a multi-sheet workbook, but I can't find it. Is there still such a mechanism, and where can I find that schema?
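
    The XML format being remembered here is most likely Excel 2003 SpreadsheetML (namespace urn:schemas-microsoft-com:office:spreadsheet), which Excel opens directly as a multi-sheet workbook. Below is a minimal hand-rolled C# sketch that writes one worksheet per DataTable; it emits every cell as a string and writes to a file (a web app would stream the same XML to the Response with a suitable content type), so treat it as an illustration of the element structure rather than a finished exporter.

        using System.Data;
        using System.Xml;

        static class SpreadsheetMlExport
        {
            const string Ns = "urn:schemas-microsoft-com:office:spreadsheet";

            // Writes each DataTable as one worksheet of a SpreadsheetML workbook.
            public static void Write(DataTable[] tables, string path)
            {
                var settings = new XmlWriterSettings { Indent = true };
                using (var w = XmlWriter.Create(path, settings))
                {
                    w.WriteProcessingInstruction("mso-application", "progid=\"Excel.Sheet\"");
                    w.WriteStartElement("Workbook", Ns);
                    foreach (DataTable table in tables)
                    {
                        w.WriteStartElement("Worksheet", Ns);
                        w.WriteAttributeString("ss", "Name", Ns, table.TableName);
                        w.WriteStartElement("Table", Ns);
                        foreach (DataRow row in table.Rows)
                        {
                            w.WriteStartElement("Row", Ns);
                            foreach (object value in row.ItemArray)
                            {
                                w.WriteStartElement("Cell", Ns);
                                w.WriteStartElement("Data", Ns);
                                w.WriteAttributeString("ss", "Type", Ns, "String");
                                w.WriteString(value == null ? "" : value.ToString());
                                w.WriteEndElement();   // Data
                                w.WriteEndElement();   // Cell
                            }
                            w.WriteEndElement();       // Row
                        }
                        w.WriteEndElement();           // Table
                        w.WriteEndElement();           // Worksheet
                    }
                    w.WriteEndElement();               // Workbook
                }
            }
        }

    The output is plain XML rather than binary .xls; Excel will open it either way, though newer versions may warn that the content does not match a .xls extension.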

    Read the article

  • Use LINQ to SQL results inside SQL Server stored procedure

    - by ifwdev
    Note: I'm not trying to call a SQL Server stored proc using an L2SQL DataContext. I use LINQPad for some fairly complex "reporting" that takes L2SQL output saved to an array and processes it further. For example, it's usually much easier to do multiple levels of grouping with LINQ to Objects instead of trying to optimize a T-SQL query to run in a reasonable amount of time. What would be the easiest way to take the end result of one of these "applications" and use it in a SQL Server 2008 stored proc? The idea is to use the data for a Reporting Services report, rather than copying and pasting into Excel (manual labor). The reports need to be accessible on the report server (not using the Report Server control in an application). I could output CSV and read that somehow via a command-line exec, but that seems like a hack. Thanks for your help.

    Read the article

  • Excel Spreadsheet - Best way to perform an Oracle Query on a cell

    - by Jamie
    Hi there, I have an Excel spreadsheet. There is a cell containing a concatenated name and surname (don't ask why), for example: Cell A2 BLOGGSJOE On this cell, I would like to run the following SQL and output it to cells A3, A4 and A5: SELECT i.id, i.forename, i.surname FROM individual i WHERE UPPER(REPLACE('" & A2 & "', ' ', '')) = UPPER(REPLACE(i.surname|| i.forename, ' ', '')) AND NVL(i.ind_efface, 'N') = 'N' Any idea how I could perform an Oracle query on each cell and return the result? I have enabled an Oracle data source connection in Excel, just not sure what to do now. Is this a stupid approach, and can you recommend a better, more proficient way? Thanks muchly! I lack the necessary experience in this type of thing! :-) EDIT: I am aware that I could just write a simple Ruby/PHP/Python/whatever script to loop through the Excel spreadsheet (or CSV file) and then perform the query, etc., but I thought there might be a quick way in Excel itself.

    Read the article

  • How can I deploy a Perl/Python/Ruby script without installing an interpreter?

    - by Brian G
    I want to write a piece of software which is essentially a regex data scrubber. I am going to take a contact list in CSV and remove all non-word characters and such from the person's name. This project has Perl written all over it, but my client base is largely non-technical, and installing Perl on Windows would not be worth it for them. Any ideas on how I can use a Perl/Python/Ruby-type language without all the headaches of getting the interpreter onto their computers? I thought about the web for a second, but it would not work for business reasons.

    Read the article

  • Getting "select permission denied" when using LINQ but my account is a sysadmin

    - by Wayne M
    I have a console app that's geared to be automatically run as a scheduled task. I use LINQ to SQL to pull some data out of the database, format it into a CSV and email it to a client. All of a sudden I am getting the error "SELECT permission denied for table", but the account I'm using to connect to the database (specified in my app.config file) has the "sysadmin" server role (bad programmer, I know; I'll get around to changing it to a better account later, but I want to make sure it works first). I can connect directly to the SQL database using that very same account and query the table in question without a problem; it only seems to happen when using the LINQ code. Any idea what would be causing this?

    Read the article

  • Preview result of update/insert query without comitting changes to database in MySQL?

    - by Camsoft
    I am writing a script to import CSV files into existing tables within my database. I decided to do the insert/update operations myself using PHP and INSERT/UPDATE statements, and not to use MySQL's LOAD DATA INFILE command; I have good reasons for this. What I would like to do is emulate the insert/update operations and display the results to the user, then give them the option of confirming that this is OK, and then commit the changes to the database. I'm using the InnoDB database engine with support for transactions. Not sure if this helps, but I was thinking along the lines of: insert/update, query the data, display to the user, then either commit or roll back the transaction? Any advice would be appreciated.
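
    For what it's worth, the commit-or-rollback flow described above looks roughly like the sketch below. It is written in C# with the MySql.Data connector purely to illustrate the transaction pattern (the question's code is PHP); the table, columns and connection string are made up. Note that holding the transaction open while waiting for the user only works if the whole interaction happens on one connection, which is awkward across separate web requests.

        using System;
        using MySql.Data.MySqlClient;

        class PreviewImport
        {
            static void Main()
            {
                using (var conn = new MySqlConnection("Server=localhost;Database=mydb;Uid=user;Pwd=pass;"))
                {
                    conn.Open();
                    var tx = conn.BeginTransaction();
                    try
                    {
                        // Apply the staged insert for one CSV row (hypothetical table).
                        var insert = new MySqlCommand(
                            "INSERT INTO contacts (name, email) VALUES ('Joe Bloggs', 'joe@example.com')",
                            conn, tx);
                        insert.ExecuteNonQuery();

                        // Read the data back inside the same transaction to build the preview.
                        var preview = new MySqlCommand("SELECT name, email FROM contacts", conn, tx);
                        using (var reader = preview.ExecuteReader())
                            while (reader.Read())
                                Console.WriteLine(reader[0] + " " + reader[1]);

                        // In the real flow this decision comes from the user's confirmation.
                        bool userConfirmed = false;
                        if (userConfirmed) tx.Commit(); else tx.Rollback();
                    }
                    catch
                    {
                        tx.Rollback();
                        throw;
                    }
                }
            }
        }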

    Read the article

  • Powershell Select-Object from array not working

    - by Andrew
    I am trying to separate values in an array so I can pass them to another function. I am using Select-Object within a for loop to go through each line and separate the timestamp and value fields. However, no matter what I do, the code below only displays the first Select-Object variable for each line. The second Select-Object command doesn't seem to work, as my output is a blank line for each of the 6 rows. Any ideas on how to get both values?
        $ReportData = $SystemStats.get_performance_graph_csv_statistics( (,$Query) )
        ### Allocate a new encoder and turn the byte array into a string
        $ASCII = New-Object -TypeName System.Text.ASCIIEncoding
        $csvdata = $ASCII.GetString($ReportData[0].statistic_data)
        $csv2 = convertFrom-CSV $csvdata
        $newarray = $csv2 | Where-Object {$_.utilization -ne "0.0000000000e+00" -and $_.utilization -ne "nan" }
        for ( $n = 0; $n -lt $newarray.Length; $n++) {
            $nTime = $newarray[$n]
            $nUtil = $newarray[$n]
            $util = $nUtil | select-object Utilization
            $util
            $tstamp = $nTime | select-object timestamp
            $tstamp
        }

    Read the article

  • Error on SQL insert statement

    - by Ashley Stewart
    I exported a recordset from one database into a CSV file, and when I try to import it into another using MySQL Workbench I keep getting this error message: Executing SQL script in server ERROR: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' 'Lord it Over', 'Ben', '1993-03-01', 'TRC', NULL, 1983, '1999-09-01', 'NULL', '' at line 1 INSERT INTO `TRC`.`horse` (`horse_id`, `registered_name`, `stable_name`, `arrival_date`, `last_known_location`, `is_ex_racer`, `birth_year`, `death_date`, `horse_comments`, `sex`, `referral_date`, `horse_height`, `arrival_weight`, `passport_no`, `microchip_no`, `is_on_waiting_list`) VALUES (, 'Lord it Over', 'Ben', '1993-03-01', 'TRC', NULL, 1983, '1999-09-01', 'NULL', 'NULL', 'NULL', NULL, NULL, 'NULL', 'NULL', 0) SQL script execution finished: statements: 29 succeeded, 1 failed Fetching back view definitions in final form. Nothing to fetch Any help would be appreciated, as there appear to be no errors as far as I can see.

    Read the article

  • Python - Finding unicode/ascii problems

    - by user330739
    Hi all, I am using csv.reader to pull in info from a very long sheet. I am doing work on that data set and then I am using the xlwt package to give me a workable Excel file. However, I get this error: UnicodeDecodeError: 'ascii' codec can't decode byte 0x92 in position 34: ordinal not in range(128) My question to you all is, how can I find exactly where that error is in my data set? Also, is there some code that I can write which will look through my data set and find out where the issues lie (because some data sets run without the above error and others have problems)?

    Read the article

  • Doing an Ajax/JSON add to a database, with a "wait, doing operation" icon

    - by Dejan.S
    Hi. There is a part of my page I want to improve. It's a file upload through which users can add their contacts from files like Excel, CSV & Outlook. I read the contacts and place them in the database, so what I would like to do is have a regular icon that spins while that operation is running. How could I do that? Ajax? I don't want a progress bar for the file upload, just for the operation of reading the file. EDIT: I want to know how to make this work with the add-to-database operation using Ajax. Should I use an UpdatePanel? Thanks

    Read the article

  • How do C or .NET programmers store and load strings in their programs?

    - by Ivan Ivkovic
    I've been doing PHP and stuff for the last year; I just got into a bit of C and C++. In the book I'm reading, all the strings are actually in the code (I realize this is just for example, but I'm just curious). My interest is: is there a common way for programmers to store strings and display them? Does .NET have some predefined way of doing this, like Android does with its strings file? (In PHP, I keep them all in CSV files, completely separate from code.)
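
    In .NET the conventional answer is .resx resource files compiled into the assembly, read back through System.Resources.ResourceManager (Visual Studio also generates a typed wrapper, so a string can be read as Properties.Resources.SomeName). A minimal sketch, where the base name "MyApp.Strings" and the key "WelcomeMessage" are hypothetical:

        using System;
        using System.Reflection;
        using System.Resources;

        class Greeting
        {
            static void Main()
            {
                // "MyApp.Strings" would correspond to a Strings.resx file compiled
                // into this assembly; "WelcomeMessage" is a key defined in it.
                var resources = new ResourceManager("MyApp.Strings", Assembly.GetExecutingAssembly());
                Console.WriteLine(resources.GetString("WelcomeMessage"));
            }
        }

    Because .resx files can exist per culture (Strings.fr.resx and so on), the same mechanism also covers localization, which is the main reason strings are kept out of the code.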

    Read the article

  • Storing an arbitrary R object onto HDD?

    - by Harokitty
    I understand that we can export data matrices to CSV or XLSX files. What about complex objects like lm? For example, in my work I might have a list of length 1000, each element holding a single lm() object. Each time I load R I have to wait a long time to populate the 1000-element list with these lm objects using a for loop or lapply. I would rather just save the list somewhere on my HDD at the end of a session and open it at the start of the next session.

    Read the article

  • Remove objects from different environments

    - by Fred
    I have an R script file that executes a second R script via: source("../scripts/second_file.R") That second file has the following lines: myfiles <- list.files(".",pattern = "*.csv") ... rm(myfiles) When I run the master R file I get: > source("../scripts/second_file.R") Error in file.remove(myfiles) : object 'myfiles' not found and the program aborts. I think this has something to do with the environment. I looked at the ?rm() help pages, but they were less than illuminating. I figure I have to give it a position argument, but I'm not sure which.

    Read the article

  • How do you make your Java application memory efficient?

    - by Boune
    How do you optimize the heap size usage of an application that has a lot (millions) of long-lived objects? (e.g. a big cache, or loading lots of records from a DB)
    - Use the right data type
    - Avoid java.lang.String to represent other data types
    - Avoid duplicated objects
    - Use enums if the values are known in advance
    - Use object pools
    - String.intern() (good idea?)
    - Load/keep only the objects you need
    I am looking for general programming or Java-specific answers. No funky compiler switches. Edit: Optimize the memory representation of a POJO that can appear millions of times in the heap. Use cases:
    - Load a huge CSV file in memory (converted into POJOs)
    - Use Hibernate to retrieve millions of records from a database
    Summary of answers:
    - Use the flyweight pattern
    - Copy on write
    - Instead of loading 10M objects with 3 properties, is it more efficient to have 3 arrays (or another data structure) of size 10M? (Could be a pain to manipulate data, but if you are really short on memory...)

    Read the article

  • Transferring data to (Windows) Mobile Devices

    - by Ritu
    I created an app for Windows Mobile 6.5 and am fairly happy with it. However, if anyone else needs to use this app, they will have to transfer an initial file (txt or csv) to the device. For a developer this isn't a problem, but is this too much to ask of an end user? Granted, they will want to move (sync) data back to their desktop after the device's data have been updated. So how do other apps solve this problem? Do I need to provide some kind of syncing software?

    Read the article

  • C#: split a string into runs of characters, numbers and delimited strings and process it

    - by nrkn
    OK, my regex is a bit rusty and I've been struggling with this particular problem... I need to split and process a string containing any number of the following, in any order: chars (lowercase letters only), quote-delimited strings, and ints. The strings are pretty weird (I don't have control over them). When there's more than one number in a row in the string, they're separated by a comma. They need to be processed in the same order that they appeared in the original string. For example, a string might look like: abc20a"Hi""OK"100,20b With this particular string the resulting call stack would look a bit like: ProcessLetters( new[] { 'a', 'b', 'c' } ); ProcessInts( 20 ); ProcessLetters( 'a' ); ProcessStrings( new[] { "Hi", "OK" } ); ProcessInts( new[] { 100, 20 } ); ProcessLetters( 'b' ); What I could do is treat it a bit like CSV, where you build tokens by processing the characters one at a time, but I think it could be done more easily with a regex?
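
    A rough sketch of one regex-based way to do it (not from the original post): a single pattern pulls out each token in order, and a small loop groups consecutive tokens of the same kind before dispatching; the Process* calls are stand-ins printed to the console.

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class TokenRuns
        {
            static void Main()
            {
                const string input = "abc20a\"Hi\"\"OK\"100,20b";

                // One match per token: a run of lowercase letters, a quoted string, or an integer.
                // The commas between consecutive numbers are simply not matched.
                var tokens = Regex.Matches(input, "[a-z]+|\"[^\"]*\"|\\d+")
                                  .Cast<Match>()
                                  .Select(m => m.Value)
                                  .ToList();

                // Classify each token, then walk the list grouping runs of the same kind.
                var kinds = tokens.Select(t =>
                    t.StartsWith("\"") ? "string" : char.IsDigit(t[0]) ? "int" : "letters").ToList();

                int i = 0;
                while (i < tokens.Count)
                {
                    int start = i;
                    while (i < tokens.Count && kinds[i] == kinds[start]) i++;
                    var run = tokens.GetRange(start, i - start);

                    switch (kinds[start])
                    {
                        case "letters":
                            // A run like "abc" becomes the individual letters a, b, c.
                            Console.WriteLine("ProcessLetters: " +
                                string.Join(" ", run.SelectMany(t => t.ToCharArray())
                                                    .Select(c => c.ToString()).ToArray()));
                            break;
                        case "string":
                            Console.WriteLine("ProcessStrings: " +
                                string.Join(" ", run.Select(t => t.Trim('"')).ToArray()));
                            break;
                        default:
                            Console.WriteLine("ProcessInts: " + string.Join(" ", run.ToArray()));
                            break;
                    }
                }
            }
        }

    For the example string this prints the runs in the same order as the call stack above: letters a b c, int 20, letter a, strings Hi OK, ints 100 20, letter b.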

    Read the article

  • Matlab time stamps reading

    - by Paul
    Is there any easy way to read all the columns in Matlab? My format is: date time y1 y2 y3 y4 ... 4/27/2010 00:3:09 34 45 45 56 ... and so on. Currently I am reading these with the code [c,pathc]=uigetfile({'*.txt'},'Select the data','C:\Data'); file=[pathc c]; data= dlmread(file, ',', 1,3); so needless to say I am skipping the time stamps. I was wondering if there is an easy way to read the time stamps and plot my other columns against the time in hours. My files are 43200 x 30 and some are 86400 x 90. Related question: is the format the same for .csv and .xls files? I mean, except of course xlsread.

    Read the article

  • How to insert a null value for a numeric field of a DataTable in C#?

    - by Pandiya Chendur
    Consider: my dynamically generated DataTable contains the following fields: Id, Name, Mob1, Mob2. If my DataTable has this, it gets inserted successfully: Id Name Mob1 Mob2 1 acp 9994564564 9568848526 But when it is like this it fails, saying: Id Name Mob1 Mob2 1 acp 9994564564 The given value of type String from the data source cannot be converted to type decimal of the specified target column. I generate my DataTable by reading a CSV file: CSVReader reader = new CSVReader(CSVFile.PostedFile.InputStream); string[] headers = reader.GetCSVLine(); DataTable dt = new DataTable(); foreach (string strHeader in headers) { dt.Columns.Add(strHeader); } string[] data; while ((data = reader.GetCSVLine()) != null) { dt.Rows.Add(data); } Any suggestion on how to insert a null value for a numeric field during BulkCopy in C#? EDIT: I tried dt.Columns["Mob2"].AllowDBNull = true; but it doesn't seem to work...
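
    A possible adjustment (an assumption, not taken from the question): give the DataTable typed columns up front and map empty CSV fields to DBNull.Value while adding rows, so SqlBulkCopy never has to convert an empty string into a decimal. The schema, connection string and table name below are hypothetical.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class CsvToBulkCopy
        {
            static void Main()
            {
                // Columns typed to match the destination table.
                var dt = new DataTable();
                dt.Columns.Add("Id", typeof(int));
                dt.Columns.Add("Name", typeof(string));
                dt.Columns.Add("Mob1", typeof(decimal));
                dt.Columns.Add("Mob2", typeof(decimal)).AllowDBNull = true;

                // One parsed CSV line with the last field empty.
                string[] data = { "1", "acp", "9994564564", "" };

                var row = dt.NewRow();
                for (int i = 0; i < data.Length; i++)
                    row[i] = string.IsNullOrEmpty(data[i]) ? (object)DBNull.Value : data[i];
                dt.Rows.Add(row);

                using (var bulk = new SqlBulkCopy("Server=.;Database=myDb;Integrated Security=SSPI;"))
                {
                    bulk.DestinationTableName = "dbo.Contacts";   // hypothetical destination
                    bulk.WriteToServer(dt);
                }
            }
        }

    The same empty-string-to-DBNull check can be dropped into the existing while loop that adds rows from GetCSVLine().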

    Read the article

  • Rails fixtures seem to be adding extra unexpected data

    - by Mason Jones
    Hello, all. I've got a dynamic fixture CSV file that's generating predictable data for a table in order for my unit tests to do their thing. It's working as expected and filling the table with the data, but when I check the table after the tests run, I'm seeing a number of additional rows of "blank" data (all zeros, etc). Those aren't being created by the fixture, and the unit tests are read-only, just doing selects, so I can't blame the code. There doesn't seem to be any logging done during the fixtures setup, so I can't see when the "blank" data is being inserted. Anyone ever run across this before, or have any ideas of how to log or otherwise see what the fixture setup is doing in order to trace down the source of the blank data?

    Read the article
