Search Results

Search found 88179 results on 3528 pages for 'project file'.


  • Windows HOSTS File

    - by jdp
    I have a network of computers and a server. I often need to edit the hosts file on all of the computers to add a new entry or to change one, e.g. 192.168.1.101 temporary-internal-site. How can I get all the machines (all Windows) to 'import' or use a hosts file on the server in addition to the normal one? That way I could just edit that one file with the new entry and all the machines would pick it up. Feel free to ask questions if you don't understand.
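    Windows itself has no supported way to chain a second hosts file, so one common workaround is a small sync job on each machine that merges entries from a central file into the local one. Purely as an illustration, here is a minimal C# sketch of that idea; the UNC path is a hypothetical placeholder, and the program must run elevated (e.g. as a scheduled task) to be able to write to the hosts file.

        // Minimal sketch: append entries from a central file to the local hosts file.
        // \\server\share\hosts.extra is a hypothetical path; adjust to your network.
        using System;
        using System.IO;
        using System.Linq;

        class HostsSync
        {
            static void Main()
            {
                string central = @"\\server\share\hosts.extra";
                string local = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.System),
                    @"drivers\etc\hosts");

                string[] existing = File.ReadAllLines(local);

                // Take only non-empty, non-comment lines that are not already present.
                var missing = File.ReadAllLines(central)
                    .Where(line => line.Trim().Length > 0 && !line.TrimStart().StartsWith("#"))
                    .Where(line => !existing.Contains(line))
                    .ToArray();

                if (missing.Length > 0)
                {
                    File.AppendAllLines(local, missing);
                }
            }
        }

    A scheduled task running this periodically would give every machine an up-to-date copy without touching its normal hosts entries.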

    Read the article

  • The Niantic Project: Ingress by Felicia Hajra-Lee

    Even though the augmented reality game for Android is still in beta, it is amazing to see that the world of literature is already gaining momentum around this 'real-life' universe. After reading 'The Alignment: Ingress' by Thomas Greanias, it took only the blink of an eye to go for 'The Niantic Project: Ingress' by Felicia Hajra-Lee, too. Here is the review I posted on Amazon.com:

    Ingress, a parallel universe to our reality, is here. There is no doubt about this anymore... The Niantic Project, which originated at the CERN collider in Geneva, Switzerland, has come into the focus of global players. And the game is hard; fair play is only for the faint-hearted. Felicia knows how to drag the audience directly into the action of the Niantic Project and its protagonists. The novella is heavily based on the investigations posted daily on the Ingress website. She really knows how to interweave the various clues, and she creates an atmosphere where it sometimes feels challenging to differentiate between fiction and reality. It all starts with 'Epiphany Night' at the Niantic Labs, the high exposure of Exotic Matter (XM), and the escape of scientist Dr. Devra Bogdanovich and 'sentinel' Roland Jarvis. Of course, new research, or should we call it technology, like the Niantic Project has to be protected, and there are multiple parties on the hunt. Throughout the various chapters Felicia introduces new potential buyers from all over the globe, gives us detailed insights into the hunters and their brutal effectiveness in finishing an assignment, and manages to keep the reader on edge thanks to a couple of turnarounds in the overall story. Personally, I have to say that I really enjoyed reading this title. Felicia's love of detail is absolutely amazing, and sometimes I really wondered whether she might be one of the assassins. But unfortunately I also have to say that I'm not a great fan of the structural organization of the (title-less) chapters. It is fascinating to follow the ventures of Devra, Farlowe, and 855, but occasionally I had to go back to previous paragraphs in order to keep track of the individual plots. Overall a great title, captivating and rich in detail, but simply too short. Please, Felicia, give us more to read.

    As an owner of an Android smartphone or tablet, you should get yourself into the world of Ingress. Check out the Play Store to install the app. Now. ;-)

    Read the article

  • Under which open source license should I release my new YouTube video download project so that no one can claim it is against YouTube?

    - by John Smiith
    I have designed a new open source YouTube video downloader, but downloading videos from YouTube is against YouTube's terms. How can I include this type of message: "This is an open source project that can be used to download videos from YouTube, but downloading videos from YouTube is against YouTube's terms, so use this project for study purposes only." Or, under which open source license can I release it so that no one can claim that the project is against YouTube?

    Read the article

  • Community Profile: Steve Blackwell on Fusion Middleware in Avocent's Trellis DCIM Project

    - by OTN ArchBeat
    Steve Blackwell is VP of engineering at Avocent. I had a chance to sit down with Steve during Oracle OpenWorld 2013 to ask him about Avocent's Trellis project, a three-year Data Center Infrastructure Management (DCIM) undertaking built on Oracle Fusion Middleware, including Oracle WebLogic Suite, Oracle Coherence, Oracle Complex Event Processing, and Oracle Service Bus. Steve shares a lot of background and technical detail on the project in this video, so check it out.

    Read the article

  • Xcode workspace with Unity3D as a sub-project?

    - by Di Wu
    Let's say we're developing a 2D game with Cocos2d-iPhone, UIKit, and Core Animation, but we're also considering leveraging the 3D capabilities of Unity 3D. Is it possible to add the Unity3D-generated Xcode project as a sub-project in the workspace and expose the 3D UI element as some kind of UIView subclass, so that the native UIKit and Core Animation code could use it without needing to mess with its underlying Unity3D implementation?

    Read the article

  • Android LiveCode #5 report: creating an Android game in 1 hour with Project Anarchy

    Hello, I had the opportunity to attend Android LiveCode #5 in Paris, at the digital art school Isart, to which Stuart Johnson, a developer at Havok, had been invited. During the session, he used Project Anarchy to build a prototype of a 3D Android game in one hour. Here is the write-up of the session and the associated video page. In this session, Stuart Johnson explains what Project Anarchy is and shows, step by step, how to create a game prototype in one...

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. (We have performed intra-cloud tests; the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes. That will be detailed separately.)

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order, so as to avoid transferring all of the files of a given size sequentially and thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yielded a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal).

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and doing a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    The first chart in the original post illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    The second chart is another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    The last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests: Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data
    - OData feeds of raw data from blocked/parallelized transfer tests: Experiment Metadata, Experiment Datasets, Raw Data for 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks
    - Excel worksheet showing summarizations and comparisons
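    Purely as an illustration of the blocked/parallel approach described above (not the author's exact test harness, which is linked in the Related Resources), here is a condensed sketch against the v1.x Microsoft.WindowsAzure.StorageClient API. The connection string and container name are placeholders, the container is assumed to already exist, and the MD5 validation and retry logic discussed in the post are omitted for brevity:

        using System;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class ParallelBlobUpload
        {
            static void Upload(string filePath, int blockSize)
            {
                CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
                CloudBlockBlob blob = account.CreateCloudBlobClient()
                                             .GetContainerReference("uploads")
                                             .GetBlockBlobReference(Path.GetFileName(filePath));

                long fileSize = new FileInfo(filePath).Length;
                int blockCount = (int)((fileSize + blockSize - 1) / blockSize);
                string[] blockIds = new string[blockCount];

                // Upload blocks in parallel; each iteration reads its own slice of the file.
                Parallel.For(0, blockCount, i =>
                {
                    blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i)); // fixed-length IDs
                    using (FileStream fs = File.OpenRead(filePath))
                    {
                        fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                        byte[] buffer = new byte[(int)Math.Min(blockSize, fileSize - (long)i * blockSize)];
                        int read = 0;
                        while (read < buffer.Length)
                        {
                            read += fs.Read(buffer, read, buffer.Length - read);
                        }
                        blob.PutBlock(blockIds[i], new MemoryStream(buffer), null);
                    }
                });

                // Commit the block list in order to assemble the final blob.
                blob.PutBlockList(blockIds);
            }
        }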

    Read the article

  • Free Online Translation Tool Hosting for Java Open Source Project [closed]

    - by Yan Cheng CHEOK
    I'm looking for free online translation tool hosting. I wish to leverage help from the open source community to keep the language files for an open source Java project up to date. http://jstock.hg.sourceforge.net/hgweb/jstock/jstock/file/7871125356f7/src/org/yccheok/jstock/data Pootle seems good, as it supports Java properties files as well. However, their official hosting site is not open for public registration. I was wondering, is there any free online translation tool hosting for a Java open source project (or similar)?

    Read the article

  • Small script to look for Project Replication actions that have failed

    - by Trond Strømme
    Today, when looking at a couple of projects on a ZFS 7320 Storage Appliance, I noticed that one project's replication action had failed; as I hadn't checked the Recent Alerts log yet, I was not aware of this. I decided to write a small script to check whether any others had failed. Nothing fancy: just a loop through all projects that looks at each project's replication child, compares the values of the last_sync and last_try properties, and prints the result if they are not equal. (There are probably more sensible ways of doing this, but at least it involved me getting the chance to put on my headphones and do just a little bit of coding.)

    Script:

        // This script will locate failed project-level replication.
        // It will look at the sync times for 'last_sync' and 'last_try'
        // and compare these; if they deviate you should investigate.
        // NOTE! This code is offered 'as is'. Run at your own risk;
        // it will probably work as intended, but in no way can I
        // (or Oracle) be held responsible if your server starts behaving
        // like a three year old kid in a candy store.. (not that mine do,
        // they are very well behaved boys...)
        run('configuration');
        run('storage');
        printf('Host: %s, pool: %s\n', get('owner'), get('pool'));
        run('cd /');
        run('shares');
        proj = list();
        printf("total projects: %d\n", proj.length);
        // just for project-level replication
        for (i = 0; i < proj.length; i++) {
            run('select ' + proj[i]);
            run('replication');
            // get all replication actions
            preps = list();
            for (j = 0; j < preps.length; j++) {
                run('select ' + preps[j]);
                last_sync = get('last_sync');
                last_try = get('last_try');
                // printf("target %s\n", get('target')); // why the flip does this not get the proper name?
                if (!(last_sync.valueOf() === last_try.valueOf())) {
                    printf("sync has failed for %s %s\n", proj[i], get('target'));
                } else {
                    // printf("OK %s %s\n", proj[i], get('target'));
                }
                run('done'); // done with the replication action
            }
            run('done');
            run('done');
        }
        printf("finished\n");

    For more on how to run the script, or on testing it, please look at my previous post.

    Sample output:

        Host: elb1sn01, pool: exalogic
        total projects: 45
        sync has failed for ACSExalogicSystem cb3a24fe-ad60-c90f-d15d-adaafd595639
        finished

    Read the article

  • Adobe releases Project Parfait, an online tool to extract the CSS from your PSD

    Adobe Project Parfait: extract the CSS from your PSD. Adobe has delivered a nice tool that promises to be of good service to webmasters by making it possible to extract the CSS code from a PSD file in the browser. In the browser? That means the current (beta) version is free and does not require owning Photoshop to use it. One can imagine a webmaster who has received a page mock-up as a PSD; all they have to do is upload it to Project Parfait to extract the CSS. The intuitive interface...

    Read the article

  • Oracle’s Primavera Inspire for SAP: Aligning Business Priorities with Project Priorities

    Oracle’s Primavera Inspire for SAP integrates schedule, financial, and resource information between Oracle’s Primavera project portfolio management applications and SAP’s enterprise resource planning solutions. Join Tracy Bowman, Principal Product Manager, and learn how Primavera Inspire for SAP can help utilities and oil and gas companies complete projects on time and within budget by providing them with a single access point for all project- and portfolio-related information.

    Read the article

  • Participate in open source project

    - by peraueb8921
    Currently I am going through a very creative phase as a developer, and I think it's a good time to contribute to an open source project, not as a "permanent" developer on one project but in a "help wanted" manner across many projects. The only open source hosting services that I know of are SourceForge and CodePlex. Any suggestions that would help me in this direction, like other sites that support this? Thanks in advance.

    Read the article

  • Certification Doesn't Make You a Project Manager

    The Project Management Institute (PMI) and the Association of Project Management Group (APMG) are two of the biggest reasons that projects fail. They have sold the myth to the corporate world and to ... [Author: Richard Morreale - Computers and Internet - April 24, 2010]

    Read the article

  • Windows 8 Stuck on Start Screen File Search

    - by baturayd
    I have been searching for someone else having the same issue, but I couldn't find any. Here is my problem: I'm using Windows 8 Pro with Media Center. The file search screen does not close itself after I make a file search within the Start screen; it gets stuck on that screen. I can't go back to the desktop, so Task Manager is inaccessible. The only way to get out of it is to sign out. It also appears non-operational: it doesn't give me any results at all, just a blank screen with a "Files" title. It used to work perfectly. Things I have done before coming here: a safe-mode minimal boot to ensure no 3rd-party software interferes; sfc found no inconsistencies; the search index has been rebuilt; a normal boot with all 3rd-party services and start-up items disabled; and a system restore to a date when I know this feature was functional. And by the way, I have installed all updates released so far. I used file search via the Start menu heavily back in Windows 7, so this is an absolute game changer for me. I'm curious what causes this. I'll do a "system refresh" if I can't fix it; I'm working on it, and I'll keep this thread updated should I find a fix.

    First update: I just discovered that the file search screen gets stuck only if it's invoked by typing a query directly in the Start screen. It doesn't get stuck if it's invoked from the Win+X menu, and I was able to "escape" the stuck screen by invoking it again from the Win+X menu. After rebuilding the search index again, search suggestions started to appear again, but there is still no file search functionality.

    Second update: A "Results for" expression appears only for a second next to the "Files" title when a search is initialized.

    Third update: As a last resort, I finally tried to do a "System Refresh", which also failed, giving an error after waiting almost 20 minutes at 99%. (Seriously?) After a cold boot it began to undo the changes. After the changes were reverted, I booted the machine without doing anything further and, bingo, search began acting normally again. This is a totally weird solution. The failed refresh obviously fixed certain system files, and kept those changes untouched because they were supposed to be that way in the first place. I'll keep this thread alive should anyone come up with a more logical explanation.

    Fourth update: After a Windows update, search functionality stopped responding again. This is the first time it has happened after a system update. Now I have a pretty good suspicion about system updates, though I have no solid proof of their involvement with this problem.

    Read the article

  • FileInput Help/Advice

    - by user559142
    I have a FileInput class. It has a String parameter in the constructor to load the file name supplied; however, it just exits if the file doesn't exist. I would like it to output a message if the file doesn't exist, but I'm not sure how... Here is the class:

        import java.io.FileInputStream;
        import java.io.FileNotFoundException;
        import java.util.Scanner;

        public class FileInput extends Input {

            /**
             * Construct a <code>FileInput</code> object given a file name.
             */
            public FileInput(final String fileName) {
                try {
                    scanner = new Scanner(new FileInputStream(fileName));
                } catch (FileNotFoundException e) {
                    // This prints a message to stderr, but then kills the whole
                    // program; propagating the exception instead would let the
                    // caller show a message and re-prompt.
                    System.err.println("File " + fileName + " could not be found.");
                    System.exit(1);
                }
            }

            /**
             * Construct a <code>FileInput</code> object given a file stream.
             */
            public FileInput(final FileInputStream fileStream) {
                super(fileStream);
            }
        }

    And its usage:

        private void loadFamilyTree() {
            out.print("Enter file name: ");
            String fileName = in.nextLine();
            FileInput input = new FileInput(fileName);
            family.load(input);
            input.close();
        }

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involved importing a CSV file to a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file to a SQL Server database. As some of you may already know, importing a CSV file to SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly assigns a data type based on the first few rows and leaves out all the data which does not match that data type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read it and recognize the exact data type of each column.

    Now what is schema.ini? Taken from the documentation: schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids misinterpreting data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

    Points to remember before creating schema.ini:

    1. The schema information file must always be named 'schema.ini'.
    2. The schema.ini file must be kept in the same directory where the CSV file exists.
    3. The schema.ini file must be created before reading the CSV file.
    4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

    Here's an example of what the schema looks like:

        [Employee.csv]
        ColNameHeader=False
        Format=CSVDelimited
        DateTimeFormat=dd-MMM-yyyy
        Col1=EmployeeID Long
        Col2=EmployeeFirstName Text Width 100
        Col3=EmployeeLastName Text Width 50
        Col4=EmployeeEmailAddress Text Width 50

    To get started, let's go ahead and create a simple blank database; just for the purpose of this demo I created a database called TestDB. After creating the database, fire up Visual Studio and create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and then place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project and set up the HTML markup with one (1) FileUpload control, one (1) Button, and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file to the SQL Server database.
    Here are the full code blocks below:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Data.OleDb;
        using System.IO;
        using System.Text;

        namespace WebApplication1
        {
            public partial class CSVToSQLImporting : System.Web.UI.Page
            {
                private string GetConnectionString()
                {
                    return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
                }

                private void CreateDatabaseTable(DataTable dt, string tableName)
                {
                    string sqlQuery = string.Empty;
                    string sqlDBType = string.Empty;
                    string dataType = string.Empty;
                    int maxLength = 0;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                    for (int i = 0; i < dt.Columns.Count; i++)
                    {
                        dataType = dt.Columns[i].DataType.ToString();
                        if (dataType == "System.Int32")
                        {
                            sqlDBType = "INT";
                        }
                        else if (dataType == "System.String")
                        {
                            sqlDBType = "NVARCHAR";
                            maxLength = dt.Columns[i].MaxLength;
                        }

                        if (maxLength > 0)
                        {
                            sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                        }
                        else
                        {
                            sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                        }
                    }

                    sqlQuery = sb.ToString();
                    sqlQuery = sqlQuery.Trim().TrimEnd(',');
                    sqlQuery = sqlQuery + " )";

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
                {
                    string sqlQuery = string.Empty;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                    sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                    sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                    sqlQuery = sb.ToString();

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                protected void Page_Load(object sender, EventArgs e)
                {
                }

                protected void BTNImport_Click(object sender, EventArgs e)
                {
                    if (FileUpload1.HasFile)
                    {
                        FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                        if (fileInfo.Name.Contains(".csv"))
                        {
                            string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                            string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                            // Save the CSV file on the server inside 'UploadedCSVFiles'
                            FileUpload1.SaveAs(csvFilePath);

                            // Fetch the location of the CSV file
                            string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                            string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                            string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                            // Load the data from the CSV into a DataTable
                            OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                            DataTable dtCSV = new DataTable();
                            DataTable dtSchema = new DataTable();

                            adapter.FillSchema(dtCSV, SchemaType.Mapped);
                            adapter.Fill(dtCSV);

                            if (dtCSV.Rows.Count > 0)
                            {
                                CreateDatabaseTable(dtCSV, fileName);
                                Label2.Text = string.Format("The table ({0}) has been successfully created to the database.", fileName);

                                string fileFullPath = filePath + fileInfo.Name;
                                LoadDataToDatabase(fileName, fileFullPath, ",");

                                Label1.Text = string.Format("({0}) records has been loaded to the table {1}.", dtCSV.Rows.Count, fileName);
                            }
                            else
                            {
                                LBLError.Text = "File is empty.";
                            }
                        }
                        else
                        {
                            LBLError.Text = "Unable to recognize file.";
                        }
                    }
                }
            }
        }

    The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable(), and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it basically gets the connection string that is configured in the web.config file. CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the file name. As the method name suggests, this method automatically creates a table in the database based on the source DataTable and the file name of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters: the tableName, the fileFullPath, and the delimiter value. This method is where the actual saving or importing of data from CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location and, at the same time, is where CreateDatabaseTable() and LoadDataToDatabase() are called. Notice that I also added some basic trappings and validations within that event.

    Now, to test the importing utility, let's create some simple data in CSV format. Just for the simplicity of this demo, let's create a CSV file named "Employee" and add some data to it. Here's an example below:

        1,VMS,Durano,[email protected]
        2,Jennifer,Cortes,[email protected]
        3,Xhaiden,Durano,[email protected]
        4,Angel,Santos,[email protected]
        5,Kier,Binks,[email protected]
        6,Erika,Bird,[email protected]
        7,Vianne,Durano,[email protected]
        8,Lilibeth,Tree,[email protected]
        9,Bon,Bolger,[email protected]
        10,Brian,Jones,[email protected]

    Now save the newly created CSV file to some location on your hard drive. Okay, let's run the application and browse to the CSV file that we have just created. (The original post includes screenshots: one after browsing to the CSV file, and one after clicking the Import button.) Now if we look at the database that we created earlier, you'll notice that the Employee table was created with the imported data in it. (See the screenshot in the original post.)

    That's it! I hope someone finds this post useful! Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET

    Read the article

  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character, but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what the Windows PATH and Java's CLASSPATH (on Windows) environment variables do: C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt. However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name, and if they do, it's their own fault"), or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere for this purpose yet.)
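    One point in favor of the pipe: unlike the semicolon, '|' appears in both Path.GetInvalidFileNameChars() and Path.GetInvalidPathChars() in .NET, so it can never occur in a valid Windows path. Here is a quick C# sketch that verifies this and round-trips a list (the two paths shown are just examples):

        using System;
        using System.IO;

        class SeparatorCheck
        {
            static void Main()
            {
                // '|' is illegal in both file names and paths; ';' is legal in file names.
                Console.WriteLine(Array.IndexOf(Path.GetInvalidFileNameChars(), '|') >= 0); // True
                Console.WriteLine(Array.IndexOf(Path.GetInvalidPathChars(), '|') >= 0);     // True
                Console.WriteLine(Array.IndexOf(Path.GetInvalidFileNameChars(), ';') >= 0); // False

                // Joining and splitting with '|' is therefore unambiguous.
                string joined = string.Join("|", new[] {
                    @"C:\somedir\somefile.txt",
                    @"C:\someotherdir\someotherfile.txt" });
                string[] paths = joined.Split('|');
                Console.WriteLine(paths.Length); // 2
            }
        }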

    Read the article

  • How can I open an .xps file in Evince?

    - by Jakob
    On projects.gnome.org I read that Evince/Document Viewer supports XPS files, but when I try to open an XPS file I get the error message "Unable to open document: File type Zip archive (application/zip) is not supported". Reading "the full list of supported document formats" on live.gnome.org, I can't find XPS there. Now I ask myself (and you): is Document Viewer unable to open XPS files, or is there something wrong with the XPS file I'm trying to open? I specifically want to do this on Ubuntu 11.10 Oneiric. The PPA ppa:medigeek/evince-xps has no solution for 11.10, and the xpstopdf utility totally mixes up the letters from my XPS file, so the resulting PDF isn't usable. I want to see a solution for Evince or GNOME in general, not get a recommendation for a KDE application like here.

    Read the article

  • Java multiple connections downloading file

    - by weulerjunior
    Hello friends, I want to add multiple connections to the code below to be able to download files faster. Could someone help me? Thanks in advance.

        public void run() {
            RandomAccessFile file = null;
            InputStream stream = null;
            try {
                // Open connection to URL.
                HttpURLConnection connection = (HttpURLConnection) url.openConnection();
                // Specify what portion of the file to download.
                connection.setRequestProperty("Range", "bytes=" + downloaded + "-");
                // Connect to server.
                connection.connect();
                // Make sure the response code is in the 200 range.
                if (connection.getResponseCode() / 100 != 2) {
                    error();
                }
                // Check for a valid content length.
                int contentLength = connection.getContentLength();
                if (contentLength < 1) {
                    error();
                }
                // Set the size for this download if it hasn't been set already.
                if (size == -1) {
                    size = contentLength;
                    stateChanged();
                }
                // Open the file and seek to the end of it.
                file = new RandomAccessFile("C:\\" + getFileName(url), "rw");
                file.seek(downloaded);
                stream = connection.getInputStream();
                while (status == DOWNLOADING) {
                    // Size the buffer according to how much of the file is left to download.
                    byte buffer[];
                    if (size - downloaded > MAX_BUFFER_SIZE) {
                        buffer = new byte[MAX_BUFFER_SIZE];
                    } else {
                        buffer = new byte[size - downloaded];
                    }
                    // Read from the server into the buffer.
                    int read = stream.read(buffer);
                    if (read == -1) {
                        break;
                    }
                    // Write the buffer to the file.
                    file.write(buffer, 0, read);
                    downloaded += read;
                    stateChanged();
                }
                // Change the status to complete if this point was reached
                // because downloading has finished.
                if (status == DOWNLOADING) {
                    status = COMPLETE;
                    stateChanged();
                }
            } catch (Exception e) {
                error();
            } finally {
                // Close the file.
                if (file != null) {
                    try {
                        file.close();
                    } catch (Exception e) {
                    }
                }
                // Close the connection to the server.
                if (stream != null) {
                    try {
                        stream.close();
                    } catch (Exception e) {
                    }
                }
            }
        }

    Read the article
