Search Results

Search found 83878 results on 3356 pages for 'google data api'.

  • "Cannot allocate memory " error whle copying data from window to ubuntu

    - by John
    I have Ubuntu 9.10 installed inside a VM on Server 2008. When I try to copy data from the network and paste it inside Ubuntu, I get the error "Cannot allocate memory". The Ubuntu VM has 3 GB of RAM. I tried the suggested registry changes: setting HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache to 1 and HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size to 3, but I am still unable to copy the file from my host machine (Windows XP) to the Ubuntu virtual machine. The file I am trying to copy is jdk-1_5_0_22-linux-i586.bin, which is 47.4 MB. Is there any other workaround for this problem?

    Read the article

  • Hard Disk recovery

    - by Shaihi
    I have three disks of the same type, model, and year of production. All of them were part of a generic IBM server solution. My problem is that all three disks suffered the same malfunction at exactly the same time and are now non-functional. I went to two different expert data-recovery labs and got the same answer: to recover the data they need another identical disk from which they can take spare parts. Can my case really be that clinical? Anyway, I am not sure this question belongs on this forum, but I am looking to buy the following disk: IBM ESERVER XSERIES, IBM P/N 24P3707, IBM FRU 24P3708, 146.8 GB USCSI 10K RPM, PART NUMBER 9V2005-027. I already bought a disk with the same part number, but the labs said that apparently I need a disk manufactured in the same factory, which means all the numbers have to be exactly the same. If anybody knows where I can purchase such a disk (the information on the lost disks is really important to me), please tell me where.

    Read the article

  • Hosting solution for sensitive client data

    - by Mark
    We are developing a web application that will deal with highly sensitive (financial) data of clients (the audience is medium to large businesses). Clients will be under scrutiny from regulators and auditors and, as such, we will be too. More importantly, to give clients a level of comfort, our application and the related hosting arrangement should inspire a lot of confidence. We are looking into using a cloud-based service like Linode, Amazon EC2, etc. To allow for maximum flexibility we are keen on putting everything on virtual servers and avoiding having to buy our own hardware. Does a cloud-based service make sense for our particular scenario? If not, what type of hosting should we consider? If so, what should we look out for? Thanks!

    Read the article

  • Excel - Reuse a trend line to apply to other data

    - by milko
    I've obtained a trend line from a particular set of data. What I'd like to do now is to reuse this trend line to predict values from a given pair (x,y) of coordinates. To put it another way, I have one pair (x,y) that I know is correct for sure. I don't know any other point. Let's assume the behavior of this new set is similar to the one I've got the trend line from. Is there any way Excel could compute other points following this trend line?
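
    A fitted trend line is just a slope and an intercept, so once those two numbers are known, new values can be predicted without touching the chart; in Excel itself the SLOPE/INTERCEPT or TREND/FORECAST worksheet functions do this against the original data range. The sketch below shows the same idea in Python with made-up numbers (the x and y values are assumptions, not data from the question):

        import numpy as np

        # Known points (hypothetical values) used to fit the trend line.
        x_known = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y_known = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

        # Fit a straight trend line: y = slope * x + intercept.
        slope, intercept = np.polyfit(x_known, y_known, 1)

        # Predict y for new x values that lie on the same trend line.
        x_new = np.array([6.0, 7.0])
        print(slope * x_new + intercept)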

    Read the article

  • Excel 2010: Move data from multiple columns to a single row

    - by frustrated529
    So frustrating! I get data sent to me and it looks like this:

        a  1
        a     2  2
        a        3  3
        b  1
        b     2  2
        b        3  3
        b           4  4

    and I need it to look like this:

        a  1  2  2  3  3
        b  1  2  2  3  3  4  4

    I have about 30 columns whose values need to move up to the top row of their group, with the duplicate rows then removed. I have been searching forums for several days and trying bits and pieces of code. I am having such a tough time with VBA!
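
    The asker is working in VBA, but the reshaping itself is easy to see in a short pandas sketch (the column names and values below are hypothetical): for each key, keep the first non-empty value found in every column, which collapses the group onto a single row and makes the leftover rows disappear.

        import pandas as pd

        # Hypothetical frame mirroring the layout above: a key column plus several
        # value columns, where each row of a group fills in only some of the columns.
        df = pd.DataFrame({
            "key":  ["a", "a", "a", "b", "b", "b", "b"],
            "col1": [1, None, None, 1, None, None, None],
            "col2": [None, 2, None, None, 2, None, None],
            "col3": [None, 2, None, None, 2, None, None],
            "col4": [None, None, 3, None, None, 3, None],
            "col5": [None, None, 3, None, None, 3, None],
            "col6": [None, None, None, None, None, None, 4],
            "col7": [None, None, None, None, None, None, 4],
        })

        # For every key, take the first non-null value in each column; this yields
        # exactly one row per key, i.e. the collapsed layout shown above.
        collapsed = df.groupby("key", as_index=False).first()
        print(collapsed)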

    Read the article

  • Minimum required bandwidth for a remote database server

    - by user66734
    I want to build a small warehousing application for my company. We have a central warehouse that distributes to 8 sales points across the country. They insist on an in-house solution. I am thinking of setting up a central MySQL database on a Linux server and having the branches connect to it to store sales. Queries to the database from the branches will be minimal, maybe 10 per hour. However, I need all the branches to be able to store each sale record (product ID, customer ID) in the central database; at peak times this happens at most once every five minutes. My question is: can I get away with simple 24 Mbps/768 kbps DSL lines? If not, what is the bandwidth requirement? Can I rely on a load-balancing router to combine additional lines if needed? Can you propose some server hardware specs?
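
    Rough arithmetic with assumed message sizes suggests bandwidth is not the constraint here: a single sale INSERT carrying two IDs is tiny even with MySQL protocol overhead, and at one insert every five minutes per branch the sustained load is effectively zero. A hedged back-of-the-envelope check in Python (all numbers are assumptions):

        # Back-of-the-envelope estimate, not a measurement.
        payload_bytes = 1024            # generous size for one INSERT incl. protocol overhead
        inserts_per_minute = 12         # assume a burst far above "once every five minutes"
        needed_bps = payload_bytes * 8 * inserts_per_minute / 60
        upstream_bps = 768 * 1000       # 768 kbps DSL upstream at a branch
        print(f"needed: {needed_bps:.0f} bps of {upstream_bps} bps available")

    On links like this, latency and line reliability tend to matter more than raw bandwidth, which is worth keeping in mind alongside the numbers.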

    Read the article

  • Chrome logs me out of everything when I exit--tried cookie-related stuff already

    - by GreatBigBore
    I've been using Chrome very successfully for a long time. It has always kept me logged in to all my sites even after exiting the app. Recently it started logging me out of everything when I exit Chrome. I've fooled around with all the various advanced cookie settings, and I've cycled through the options hoping that Chrome just needed a wakeup call or a reset or something. I've also deleted all the cookies in case a corrupted one is confusing Chrome. Nothing works! I see cookies when I log in, but they all go away when I exit Chrome. I've searched all over the place and seen only the standard answers relating to resetting cookies, local data, sessions, that sort of thing. Any Chrome gurus out there, please send a telepathic message to my browser asking it to resume its previous excellent behavior. Alternatively, you could suggest other possible solutions.

    Read the article

  • Tar and gzip together, but the other way round?

    - by Boldewyn
    Gzipping a tar file as a whole is drop-dead easy and even implemented as an option inside tar. So far, so good. However, from an archiver's point of view, it would be better to tar the gzipped single files. (The rationale behind it is that data loss is minimized if a single gzipped file is corrupt, compared with the whole tarball being corrupted by gzip or copy errors.) Has anyone experience with this? Are there drawbacks? Are there more solid/tested solutions for this than:

        find folder -exec gzip '{}' \;
        tar cf folder.tar folder
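
    The same two-step idea can be expressed in a few lines of Python, which makes it easier to bolt on checks or logging later. This is only a sketch (the folder and archive names are assumptions), and the usual trade-off applies: compressing files individually gives up the cross-file compression that a single gzipped tarball would get.

        import gzip
        import shutil
        import tarfile
        from pathlib import Path

        folder = Path("folder")  # assumed input directory

        # Gzip every regular file individually, so a corrupt member only costs that one file.
        for path in [p for p in folder.rglob("*") if p.is_file() and p.suffix != ".gz"]:
            with open(path, "rb") as src, gzip.open(str(path) + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            path.unlink()  # drop the original once the .gz copy exists

        # Bundle the per-file .gz archives into one uncompressed tarball.
        with tarfile.open("folder.tar", "w") as tar:
            tar.add(folder)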

    Read the article

  • Three disk (possibly RAID) data recovery

    - by Martin
    I have on my desk three 160 GB disks that were once part of an HP ProLiant Windows 2003 server. They may have been part of a RAID configuration of some sort, and they may or may not be damaged in some way. When I connect them via USB, one of them shows up as a drive, but unformatted. The others show up as uninitialized disks in Disk Management. An alternative explanation is that those two drives were simply never used. What's my first step? I've recovered data off damaged drives before but have never had anything to do with RAID configurations. How can I even tell if any type of RAID was used?

    Read the article

  • Best archive format & tool for large amounts of data (50 GB+)

    - by marcusstarnes
    I only realised this afternoon that the ZIP format has a limit of what appears to be around 20 GB. I am trying to automate an archive process (using Automate) to zip/rar/whatever a collection of folders/files on one of my disks. It always appeared to bomb out with an incomplete archive at about 20 GB. So I tried using WinRAR and doing it manually as a ZIP file, but it told me of the limit. So, I was wondering, what is a recommended archive format (and tool for accomplishing the task) for archiving a large amount of data (around 50 GB)? Thanks

    Read the article

  • Preventing an Apache 2 Server from Logging Sensitive Data

    - by jstr
    Apache 2 by default logs the entire request URI including query string of every request. What is a straight forward way to prevent an Apache 2 web server from logging sensitive data, for example passwords, credit card numbers, etc., but still log the rest of the request? I would like to log all log-in attempts including the attempted username as Apache does by default, and prevent Apache from logging the password directly. I have looked through the Apache 2 documentation and there doesn't appear to be an easy way to do this other than completely preventing logging of these requests (using SetEnvIf). How can I accomplish this?
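
    As far as I know, Apache's standard LogFormat directives cannot redact individual query-string parameters, so one workaround is to pipe the access log through a small filter that blanks the sensitive values before they reach disk (Apache supports piped logs, e.g. CustomLog "|/path/to/filter" combined). The sketch below is only an illustration; the output path and the parameter names are assumptions. Note also that this only matters for values sent in the query string, since POST bodies are not written to the access log in the first place.

        #!/usr/bin/env python
        # Hedged sketch of a piped-log filter: reads access-log lines on stdin and
        # blanks the values of sensitive query-string parameters before writing.
        import re
        import sys

        SENSITIVE = ("password", "passwd", "card_number")
        PATTERN = re.compile(r"(%s)=[^&\s\"]*" % "|".join(SENSITIVE), re.IGNORECASE)

        with open("/var/log/apache2/access_scrubbed.log", "a", buffering=1) as out:
            for line in sys.stdin:
                out.write(PATTERN.sub(r"\1=REDACTED", line))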

    Read the article

  • Archive format & tool for large amounts of data (50gb+)

    - by marcusstarnes
    I only realised this afternoon that the ZIP format has a limit of what appears to be around 20gb. I am trying to automate an archive process (using Automate) to zip/rar/whatever a collection of folders/files on one of my disks. It always appeared to bomb out with an incomplete archive at about 20gb. So I tried using WinRAR and doing it manually as a ZIP file, but it told me of the limit. So, I was wondering, what is a recommended zip format (and tool for accomplishing the task) for archiving up a large amount of data (around 50gb)?

    Read the article

  • Removing extra commas in CSV without another data source

    - by fi-no
    We have a large database of customer addresses that was exported from a SQL database to CSV. Whenever a company has a comma in its name, it (predictably) throws the whole file out of alignment. Unfortunately, there are so many instances of this (and of commas in the second address line) that the whole CSV (~100k rows) is a huge mess. The obvious fix is to export the data again in a different, non-comma-delimited format, but access to that SQL database is more or less impossible at the moment. I've tried a few tools and brainstormed about combining things to fix this, but I figured asking couldn't hurt. Thanks!
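
    When the export has a fixed number of columns and the unquoted commas are known to come from one column, a hedged approach is to count the fields in each row and fold the surplus pieces back into that column; the Python sketch below uses assumed file names, an assumed column count and an assumed column position. It only works cleanly when a single column is responsible for the extra commas, so rows where both the name and the second address line are affected still need manual review.

        import csv

        EXPECTED_FIELDS = 8   # assumed column count of the original export
        NAME_COLUMN = 1       # assumed index of the column that may contain stray commas

        repaired = []
        with open("customers.csv", newline="") as f:
            for row in csv.reader(f):
                extra = len(row) - EXPECTED_FIELDS
                if extra > 0:
                    # Fold the surplus pieces back into the offending column.
                    row = (row[:NAME_COLUMN]
                           + [", ".join(row[NAME_COLUMN:NAME_COLUMN + extra + 1])]
                           + row[NAME_COLUMN + extra + 1:])
                repaired.append(row)

        with open("customers_fixed.csv", "w", newline="") as f:
            csv.writer(f).writerows(repaired)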

    Read the article

  • Open-source monitoring tool that does not send data to "their server"

    - by hangu
    I'm trying to use an open-source server monitoring tool. I know there are a lot of them, but I couldn't find what I need. The basic process of the monitoring tools I have used before was: 1) install an agent on the server I want to monitor; 2) the agent sends data to "their server"; 3) I check the health of my server through a web page they host. What I need is a way to avoid step 2. Are there any monitoring tools I can use like that? I have Windows 2008 and Linux servers; simple features (CPU, memory, network) will be enough. Thank you

    Read the article

  • Wubi installation wiped my hard drive?

    - by gkaykck
    Here is what happened: I installed Xubuntu via Wubi on my D: drive. I have two drives, by the way, C: and D:. Basically I use the C: drive for Windows and the D: drive for everything else plus backups, as everybody does, and I put the Wubi installation on D: too. Then I tried to do a slightly extreme thing: I tried to make a shortcut to the D: drive from within Xubuntu. The problem is that suddenly all my files disappeared. The folders stayed the same, but the files disappeared. The drive still holds the files (I know because it is still full), but I cannot see any of them. I tried checking for errors and some basic data recovery, which didn't work at all. Any help?

    Read the article

  • Chrome "Sending request..." indefinitely

    - by SemMike
    After some time browsing in Chrome (30 minutes on average, varies widely), it suddenly can't receive any data from the Internet whatsoever. Trying to go to any address displays the "Sending request..." message indefinitely. This happens out of the blue, not after doing anything special. Other browsers continue to work perfectly. Quitting and relaunching Chrome always solves the problem, but nothing else works. This problem is really annoying; it has been that way for many versions, even the latest one as of now. Has anyone else experienced this problem? How can I fix it?

    Read the article

  • DataGrid height issue in a nested DataGrid (when using three DataGrids)

    - by prince23
    Hi, I have a nested DataGrid (three DataGrids deep). I am able to show data with no issue. The first DataGrid has 5 rows. The main problem comes here: when you click any row in the first DataGrid, I show a second DataGrid (which has 10 rows). Let's say I click row 3 in the second DataGrid; it shows further records in a third DataGrid. When I click row 5 in the second DataGrid, it again shows further records in a third DataGrid. All the records show fine, but when I try to collapse row 3 in the second DataGrid, it collapses, yet the height the third DataGrid took up to show its records remains (you can see a blank space between the second DataGrid and the third DataGrid). In the first column of every grid I have a button with the code below for expand and collapse; this is the functionality I implement in every DataGrid's button. Hope my question is clear. Any help would be great.

        private void btn1_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                Button btnExpandCollapse = sender as Button;
                Image imgScore = (Image)btnExpandCollapse.FindName("img");

                // Walk up the visual tree from the clicked element to its DataGridRow.
                DependencyObject dep = (DependencyObject)e.OriginalSource;
                while ((dep != null) && !(dep is DataGridRow))
                {
                    dep = VisualTreeHelper.GetParent(dep);
                }

                // If we found the clicked row, toggle its details visibility.
                if (dep != null && dep is DataGridRow)
                {
                    DataGridRow row = (DataGridRow)dep;
                    if (row.DetailsVisibility == Visibility.Collapsed)
                    {
                        imgScore.Source = new BitmapImage(new Uri("/Images/a1.JPG", UriKind.Relative));
                        row.DetailsVisibility = Visibility.Visible;
                    }
                    else
                    {
                        imgScore.Source = new BitmapImage(new Uri("/Images/a2.JPG", UriKind.Relative));
                        row.DetailsVisibility = Visibility.Collapsed;
                    }
                }
            }
            catch (System.Exception)
            {
            }
        }

    The btn2_Click handler on the second DataGrid and the btn1_Click handler on the third DataGrid are identical to the code above apart from the method name.

    Read the article

  • Tomcat 6, JPA and Data sources

    - by asrijaal
    Hi there, I'm trying to get a data source working in my JSF app. I defined the data source in my web app's context.xml (webapp/META-INF/context.xml):

        <?xml version="1.0" encoding="UTF-8"?>
        <Context antiJARLocking="true" path="/Sale">
          <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
                    maxActive="20" maxIdle="10" maxWait="-1" name="Sale"
                    password="admin" type="javax.sql.DataSource"
                    url="jdbc:mysql://localhost/sale" username="admin"/>
        </Context>

    web.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
          <filter>
            <display-name>RichFaces Filter</display-name>
            <filter-name>richfaces</filter-name>
            <filter-class>org.ajax4jsf.Filter</filter-class>
          </filter>
          <filter-mapping>
            <filter-name>richfaces</filter-name>
            <servlet-name>Faces Servlet</servlet-name>
            <dispatcher>REQUEST</dispatcher>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>INCLUDE</dispatcher>
          </filter-mapping>
          <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
          </servlet>
          <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>/faces/*</url-pattern>
          </servlet-mapping>
          <session-config>
            <session-timeout>30</session-timeout>
          </session-config>
          <welcome-file-list>
            <welcome-file>faces/welcomeJSF.jsp</welcome-file>
          </welcome-file-list>
          <context-param>
            <param-name>org.richfaces.SKIN</param-name>
            <param-value>ruby</param-value>
          </context-param>
        </web-app>

    and my persistence.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
          <persistence-unit name="SalePU" transaction-type="RESOURCE_LOCAL">
            <provider>oracle.toplink.essentials.PersistenceProvider</provider>
            <non-jta-data-source>Sale</non-jta-data-source>
            <class>org.comp.sale.AnfrageAnhang</class>
            <class>org.comp.sale.Beschaffung</class>
            <class>org.comp.sale.Konto</class>
            <class>org.comp.sale.Anfrage</class>
            <exclude-unlisted-classes>false</exclude-unlisted-classes>
          </persistence-unit>
        </persistence>

    But no data source seems to be created by Tomcat; I only get this exception:

        Exception [TOPLINK-7060] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: Cannot acquire data source [Sale].
        Internal Exception: javax.naming.NameNotFoundException: Name Sale is not bound in this Context

    The jars needed for the MySQL driver are included in the WEB-INF/lib directory. What am I doing wrong?

    Read the article

  • Spring.Data.NHibernate12: Application not closing database connection (getting max connection pool

    - by anupam3m
    Even after a successful transaction, the application's connection to the database persists. The NHibernate log shows:

        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - executing flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null)> - registering flush begin
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null)> - registering flush end
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - post flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - before transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null)> - aggressively releasing database connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Connection.ConnectionProvider [(null)] <(null)> - Closing connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Transaction.AdoTransaction [(null)] <(null)> - running AdoTransaction.Dispose()
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - closing session
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.BatcherImpl [(null)] <(null)> - running BatcherImpl.Dispose(true)

    Below is my data configuration file:

        <?xml version="1.0" encoding="utf-8" ?>
        <objects xmlns="http://www.springframework.net"
                 xmlns:db="http://www.springframework.net/database"
                 xmlns:tx="http://www.springframework.net/tx">
          <property name="CacheSettings" ref="CacheSettings"/>
          type="Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities.UpdateEntityCacheHelper, Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities" singleton="false"/>
          <object type="Spring.Objects.Factory.Config.PropertyPlaceholderConfigurer, Spring.Core">
            <property name="ConfigSections" value="databaseSettings"/>
          </object>
          <db:provider id="AMACDbProvider" provider="OracleClient-2.0"
                       connectionString="Data Source=RISCODEVDB;User ID=amacdevuser; Password=amacuser1234;"/>
          <object id="NHibernateSessionFactory" type="Spring.Data.NHibernate.LocalSessionFactoryObject, Spring.Data.NHibernate12">
            <property name="DbProvider" ref="AMACDbProvider"/>
            <value>Risco.Rsp.Ac.AMAC.CacheMappings</value>
            </property>
            <dictionary>
              <entry key="hibernate.connection.provider" value="NHibernate.Connection.DriverConnectionProvider"/>
              <entry key="hibernate.dialect" value="NHibernate.Dialect.Oracle9Dialect"/>
              value="NHibernate.Driver.OracleClientDriver"/>
            singleton="false"
            <property name="SessionFactory" ref="NHibernateSessionFactory"/>
            <property name="TemplateFlushMode" value="Auto"/>
            <property name="CacheQueries" value="true"/>
            <property name="EntityInterceptor" ref="AuditLogger"/>
          type="Spring.Data.NHibernate.HibernateTransactionManager, Spring.Data.NHibernate12">
            <property name="DbProvider" ref="AMACDbProvider"/>
            <property name="SessionFactory" ref="NHibernateSessionFactory"/>
            <property name="EntityInterceptor" ref="AuditLogger"/>
          type="Spring.Transaction.Interceptor.TransactionProxyFactoryObject, Spring.Data"
            <property name="PlatformTransactionManager" ref="transactionManager"/>
            <property name="Target" ref="EventPubSubDAO"/>
            <property name="TransactionAttributes">
              <name-values>
                <add key="Save*" value="PROPAGATION_REQUIRES_NEW"/>
                <add key="Delete*" value="PROPAGATION_REQUIRED"/>
              </name-values>
            </property>
          type="Risco.Rsp.Ac.AMAC.DAO.EventPubSubMgmt.EventPubSubDAO, Risco.Rsp.Ac.AMAC.DAO.EventPubSubMgmt"
          </object>
          <tx:attribute-driven/>
        </objects>

    Please help me out with this issue. Thanks.

    Read the article

  • Reading XML File

    - by Joy
    I'm developing an iPhone application which uses the Google Weather API to forecast the weather. The web service is giving me data in the following format:

        <?xml version="1.0"?>
        <xml_api_reply version="1">
          <weather module_id="0" tab_id="0" mobile_row="0" mobile_zipped="1" row="0" section="0">
            <forecast_information>
              <city data="Kolkata, West Bengal"/>
              <postal_code data="Kolkata"/>
              <latitude_e6 data=""/>
              <longitude_e6 data=""/>
              <forecast_date data="2010-04-28"/>
              <current_date_time data="2010-04-28 10:20:00 +0000"/>
              <unit_system data="US"/>
            </forecast_information>
            <current_conditions>
              <condition data="Haze"/>
              <temp_f data="97"/>
              <temp_c data="36"/>
              <humidity data="Humidity: 53%"/>
              <icon data="/ig/images/weather/haze.gif"/>
              <wind_condition data="Wind: S at 12 mph"/>
            </current_conditions>
            <forecast_conditions>
              <day_of_week data="Wed"/>
              <low data="82"/>
              <high data="91"/>
              <icon data="/ig/images/weather/chance_of_rain.gif"/>
              <condition data="Chance of Rain"/>
            </forecast_conditions>
            <forecast_conditions>
              <day_of_week data="Thu"/>
              <low data="82"/>
              <high data="96"/>
              <icon data="/ig/images/weather/rain.gif"/>
              <condition data="Rain"/>
            </forecast_conditions>
            <forecast_conditions>
              <day_of_week data="Fri"/>
              <low data="82"/>
              <high data="96"/>
              <icon data="/ig/images/weather/sunny.gif"/>
              <condition data="Clear"/>
            </forecast_conditions>
            <forecast_conditions>
              <day_of_week data="Sat"/>
              <low data="78"/>
              <high data="98"/>
              <icon data="/ig/images/weather/mostly_sunny.gif"/>
              <condition data="Mostly Sunny"/>
            </forecast_conditions>
          </weather>

    As I'm new to iPhone development, I'm facing a problem reading this with an XML parser. Please help me out of this problem. Looking forward to your valuable reply. Thanks in advance.
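
    Everything useful in that reply lives in "data" attributes on nested elements rather than in element text, so whichever parser is used has to read attributes; with NSXMLParser on the iPhone that means looking at the attribute dictionary passed to the didStartElement delegate callback. As a plain illustration of which elements and attributes to pull (Python here just to keep the sketch short, not iPhone code, and the file name is an assumption):

        import xml.etree.ElementTree as ET

        # Parse a saved copy of the API reply and read the "data" attributes.
        tree = ET.parse("weather.xml")
        weather = tree.find(".//weather")

        city = weather.find("forecast_information/city").get("data")
        current = weather.find("current_conditions")
        print(city, current.find("condition").get("data"),
              current.find("temp_c").get("data") + " C")

        # One <forecast_conditions> element per forecast day.
        for day in weather.findall("forecast_conditions"):
            print(day.find("day_of_week").get("data"),
                  day.find("low").get("data"),
                  day.find("high").get("data"),
                  day.find("condition").get("data"))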

    Read the article

  • simplejson double escapes data causing invalid JSON string

    - by mike_hornbeck
    I have a simple form for managing manufacturers in my shop. After posting the form, an AJAX call returns JSON with updated data for the form. The problem is that the returned string is invalid; it looks as if it was double-escaped. Strangely, a similar approach works without any problems across the whole shop. I'm also using jQuery 1.6 as the JavaScript framework. The model consists of 3 fields: a char field for the name, a text field for the description, and an image field for the manufacturer logo. The function:

        def update_data(request, manufacturer_id):
            """Updates data of manufacturer with given manufacturer id."""
            manufacturer = Manufacturer.objects.get(pk=manufacturer_id)
            form = ManufacturerDataForm(request.FILES, request.POST, instance=manufacturer)
            if form.is_valid():
                form.save()
                msg = _(u"Manufacturer data has been saved.")
            html = [
                ["#data", manufacturer_data_inline(request, manufacturer_id, form)],
                ["#selectable-factories-inline", selectable_manufacturers_inline(request, manufacturer_id)],
            ]
            result = simplejson.dumps({
                "html": html
            }, cls=LazyEncoder)
            return HttpResponse(result)

    The error in the console:

        uncaught exception: Invalid JSON: {"html": [["#data", "\n<h2>Dane</h2>\n<div class="\&quot;manufacturer-image\&quot;">\n \n</div>\n<form action="\&quot;/manage/update-manufacturer-data/1\&quot;" method="\&quot;post\&quot;">\n \n <div class="\&quot;field\&quot;">\n <div class="\&quot;label\&quot;">\n <label for="\&quot;id_name\&quot;">Nazwa</label>:\n </div>\n \n \n <div class="\&quot;error\&quot;">\n <input id="\&quot;id_name\&quot;" name="\&quot;name\&quot;" maxlength="\&quot;50\&quot;" type="\&quot;text\&quot;">\n <ul class="\&quot;errorlist\&quot;"><li>Pole wymagane</li></ul>\n </div>\n \n </div>\n\n <div class="\&quot;field\&quot;">\n <div class="\&quot;label\&quot;">\n <label for="\&quot;id_image\&quot;">Zdjecie</label>:\n </div>\n \n \n <div>\n <input name="\&quot;image\&quot;" id="\&quot;id_image\&quot;" type="\&quot;file\&quot;">\n </div>\n \n </div>\n\n <div class="\&quot;field\&quot;">\n <div class="\&quot;label\&quot;">\n <label for="\&quot;id_description\&quot;">Opis</label>:\n </div>\n \n \n <div>\n <textarea id="\&quot;id_description\&quot;" rows="\&quot;10\&quot;" cols="\&quot;40\&quot;" name="\&quot;description\&quot;"></textarea>\n </div>\n \n </div>\n \n <div class="\&quot;buttons\&quot;">\n <input class="\&quot;ajax-save-button" button\"="" type="\&quot;submit\&quot;">\n </div>\n</form>"], ["#selectable-factories-inline", "\n <div>\n <a class="\&quot;selectable" selected\"\n="" href="%5C%22/manage/manufacturer/1%5C%22">\n L1\n </a>\n </div>\n\n <div>\n <a class="\&quot;selectable" \"\n="" href="%5C%22/manage/manufacturer/4%5C%22">\n KR3W\n </a>\n </div>\n\n <div>\n <a class="\&quot;selectable" \"\n="" href="%5C%22/manage/manufacturer/3%5C%22">\n L1TA\n </a>\n </div>\n\n"]]}

    Any ideas?
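
    For comparison, a hedged sketch of one way the payload can be built (the template name and context are hypothetical, and the imports assume the simplejson bundled with older Django): render each HTML fragment exactly once, let simplejson perform the only round of escaping, return the result with a JSON content type, and round-trip the string as a quick validity check before the browser ever sees it. If the fragments are being escaped again somewhere between rendering and dumps, that second pass is what produces output like the one above.

        from django.http import HttpResponse
        from django.template.loader import render_to_string
        from django.utils import simplejson


        def manufacturer_payload(request, form, manufacturer_id):
            # Hypothetical helper: each fragment is rendered once, then dumped once.
            html = [
                ["#data", render_to_string("manage/manufacturer_data.html",
                                           {"form": form, "id": manufacturer_id})],
            ]
            payload = simplejson.dumps({"html": html})
            # Sanity check: parsing the payload back proves it is valid JSON.
            simplejson.loads(payload)
            return HttpResponse(payload, content_type="application/json")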

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point and click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There’s no need to learn the intricacies of the data integration tool or to write code. Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file.

    Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
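
    For readers who want to see what those schema directives amount to, the same transformations described above (two-digit years resolved with the Y01 rule, a time component on the termination date, and the currency symbol and digit grouping stripped from the salary) can be written out in a few lines of Python. This is only an illustration of the logic, not part of expressor Studio or CSVexpress:

        import csv
        from datetime import datetime
        from decimal import Decimal

        def fix_century(two_digit_year):
            # The Y01 rule described above: years after 01 are read as 19xx,
            # years at or before 01 as 20xx.
            return (1900 if two_digit_year > 1 else 2000) + two_digit_year

        def parse_row(line):
            emp_id, start, end, job, dept, salary = next(csv.reader([line]))
            start_dt = datetime.strptime(start, "%d-%b-%y")
            start_dt = start_dt.replace(year=fix_century(start_dt.year % 100))
            end_dt = datetime.strptime(end, "%d-%b-%y %H:%M")
            end_dt = end_dt.replace(year=fix_century(end_dt.year % 100))
            # Strip the dollar sign and digit grouping to get a clean decimal value.
            final_salary = Decimal(salary.replace("$", "").replace(",", ""))
            return int(emp_id), start_dt, end_dt, job, int(dept), final_salary

        print(parse_row('102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"'))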

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in ASP.NET James Chambers.

    We got to spend a great week back in March at Prairie Dev Con West, chock full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL Server, if it was the same as EF proper and if there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

    Consider the example of a hockey league. You have several different entities in the league including games, teams that play the games and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen:

    1. The team, games and mascot will be deleted, and
    2. The players for that team will remain in the league (and therefore the database) but they should no longer be assigned to a team.

    So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach

    First up let’s take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:

    - Players.Team_Id (NULL) –> Teams.Id
    - Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE)
    - Games.HomeTeam_Id (NOT NULL) –> Teams.Id
    - Games.AwayTeam_Id (NOT NULL) –> Teams.Id

    Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team’s mascot when the team is deleted. While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE" – MSDN.

    When we create an entity data model from the above database, we get the following. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now, we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.

    Building the Same Sample with EF Code First

    Next, we're going to build up the model with the code first approach.
    EF Code First is defined on the Ado.Net team blog as such: "Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database."

    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:

    - If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.
    - If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.
    - The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

    Data Annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }

            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required, however all the Player entities must be loaded into memory for the cascading to work properly.

    The Game –> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set On Cascade Delete for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException: "Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint."

    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team.
        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in homeGames)
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults – When and How To

    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

    Going Further

    These were simple examples but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:

    - Entity Framework Code First also works seamlessly with SQL Azure (MSDN)
    - Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)
    - You can use Code Based migrations to manage database upgrades as your model continues to evolve (MSDN)

    Next Steps

    There's no time like the present to start the learning, so here's what you need to do:

    - Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)
    - Build yourself a project to try these concepts out (or download the sample project)
    - Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).

    Good luck!

    About the Authors

    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community giving presentations both locally and at conferences. David also serves as the President of Calgary .Net User Group.

    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.

    Read the article

  • SQLAuthority News – Whitepaper Download – Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries

    - by pinaldave
    The size of databases is growing every day. Many organizations nowadays have more than a terabyte of data in their systems, and performance is always part of the issue. Microsoft is paying real attention to this and is focusing on improving performance for data warehousing. Microsoft has recently released a whitepaper on the subject of data warehousing performance tuning. Here is the abstract from the official site: "In this white paper we discuss two of the new features introduced in SQL Server 2008, Star Join and Few-Outer-Row optimizations. These two features are in SQL Server 2008 R2 as well. We test the performance of SQL Server 2008 on a set of complex data warehouse queries designed to highlight the effect of these two features and observed a significant performance gain over SQL Server 2005 (without these two features). The results observed also apply to SQL Server 2008 R2. On average, about 75 percent of the query execution time has been reduced, compared to SQL Server 2005. We also include data that shows a reduction in the number of rows processed and improved balance in parallel queries, both of which highlight the important role the Star Join and Few Outer-Row features played." I encourage everyone interested in data warehousing to read it and see if they can learn the tricks. Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Microsoft Silverlight 4 Data and Services Cookbook – Book Review (sort of)

    - by Jim Duffy
    I just received my copy of the Microsoft Silverlight 4 Data and Services Cookbook, co-authored by fellow Microsoft Regional Director, Gill Cleeren, and at first glance I like what I see. I’ve always been a fan of the “cookbook” approach to technical books because they are problem/solution oriented. Often developers need solutions to solve specific questions like “how do I send email from within my .NET application” and so on, and yes, that was a blatant plug to my article explaining how to accomplish just that, but I digress. :-) I also enjoy the cookbook approach because you can just start flipping pages and randomly stop somewhere and see what nugget of information is staring up at you from the page. Anyway, what I like about this book is that it focuses on a specific area of Silverlight development, accessing data and services. The book is broken down into the following chapters:

    - Chapter 1: Learning the Nuts and Bolts of Silverlight 4
    - Chapter 2: An Introduction to Data Binding
    - Chapter 3: Advanced Data Binding
    - Chapter 4: The Data Grid
    - Chapter 5: The DataForm
    - Chapter 6: Talking to Services
    - Chapter 7: Talking to WCF and ASMX Services
    - Chapter 8: Talking to REST and WCF Data Services
    - Chapter 9: Talking to WCF RIA Services
    - Chapter 10: Converting Your Existing Applications to Use Silverlight

    As you can see this book is all about working with Silverlight 4 and data. I’m looking forward to taking a closer look at it. Have a day. :-|

    Read the article
