Search Results

Search found 6110 results on 245 pages for 'graph databases'.


  • How do I diff two spreadsheets?

    - by neu242
    We have a lot of spreadsheets (xls) in our Subversion repository. These are usually edited with Gnumeric or OpenOffice.org, and are mostly used to populate databases for unit testing with dbUnit. There are no easy ways of doing diffs on xls files that I know of, and this makes merging extremely tedious and error prone. I've found Spreadsheet Compare, but it requires Excel 2000 or later. I've also tried converting the spreadsheets to XML and doing a regular diff, but it really feels like a last resort. Are there any tools for diffing two spreadsheets (xls or ods)? I am primarily looking for a multi-platform/open source tool.
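
    One workable fallback (not from the original question) is to dump each sheet to plain text and diff that. A minimal sketch, assuming the legacy xls format and the third-party xlrd package; the file names are placeholders:

        import difflib
        import xlrd  # third-party package for the old .xls format

        def sheet_as_lines(path):
            """Flatten every sheet into tab-separated text lines."""
            book = xlrd.open_workbook(path)
            lines = []
            for sheet in book.sheets():
                for i in range(sheet.nrows):
                    values = "\t".join(str(v) for v in sheet.row_values(i))
                    lines.append(f"{sheet.name}\t{values}")
            return lines

        old = sheet_as_lines("old.xls")
        new = sheet_as_lines("new.xls")
        for line in difflib.unified_diff(old, new, "old.xls", "new.xls", lineterm=""):
            print(line)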

    Read the article

  • How can I use multiple Datatables on my CrystalReport?

    - by Sergio Tapia
    I have a dataset that connects to three databases. How can I attach my CrystalReportViewer so all three are included?

        protected void Page_Load(object sender, EventArgs e)
        {
            ReportDocument X = new ReportDocument();
            DataTable DTable = new DataTable();
            DataSet1TableAdapters.TableAdapterManager ????? = new WebApplication1.DataSet1TableAdapters.TableAdapterManager();
            DTable = ????????
            string ubicacion = Server.MapPath("crystalReport1.rpt");
            X.Load(ubicacion);
            X.SetDataSource(DTable);
            CrystalReportViewer1.ReportSource = X;
        }

    Read the article

  • Choosing connectionstring in VB.NET application at startup

    - by MatsT
    I have a VB.NET application with a connection to an SQL Server 2003 server. On the server there are two databases, MyDatabase and MyDatabase_Test. What I would like to do is show a dialog when the program starts that lets the user choose which database to use. My idea is to create a new form as the startup form that sets this property and then launches the main form. Currently the connection string is specified in the application config file. Best would be if I could specify two different connection strings in that file to choose from, but for now other solutions are also acceptable, such as hardcoding the two connection strings into the startup form.

    Read the article

  • How do you access config outside of a request in CherryPy?

    - by OrganicPanda
    I've got a webapp running on CherryPy that needs to access the CherryPy config files before a user creates a request. The docs say to use:

        host = cherrypy.request.app.config['database']['host']

    But that won't work outside of a user request. You can also use the application object when you start the app, like so:

        ...
        application = cherrypy.tree.mount(root, '/', app_conf)
        host = application.config['database']['host']
        ...

    But I can see no way of accessing 'application' from other classes outside of a user request. I ask because our app looks at several databases and we set them up when the app starts rather than on user request. I have a feeling this would be useful in other places too; so is there any way to store a reference to 'application' somewhere or access it through the CherryPy API?
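
    One pattern that matches what the asker describes, offered as a sketch rather than the official CherryPy answer: keep the Application object returned by cherrypy.tree.mount() at module scope, so startup code and other classes can import it and read the merged config before any request arrives. The config section and key names below are assumptions.

        import cherrypy

        class Root:
            @cherrypy.expose
            def index(self):
                return "hello"

        app_conf = {'database': {'host': 'localhost', 'name': 'appdb'}}

        # mount() returns the Application object; keeping the reference lets
        # startup code (e.g. connection-pool setup) read config pre-request.
        application = cherrypy.tree.mount(Root(), '/', app_conf)

        db_host = application.config['database']['host']
        print("configured database host:", db_host)

        if __name__ == '__main__':
            cherrypy.engine.start()
            cherrypy.engine.block()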

    Read the article

  • Is there a tool to see the difference between two database tables in SQL Server?

    - by reinier
    What is a good tool to see the differences between two tables (or even better, the datasets returned by two queries)? EDIT: I'm not interested in the schema changes. Just assume that the schemas are the same. Background as to why: I'm porting some legacy code which can fill a database with some pre-calculated data. The easiest way to see if I got everything right is to check the output of the old program against the new one. I was thinking that if there is some kind of 'diff' tool for databases, this might be great.
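
    Not a tool recommendation, but a minimal sketch of the brute-force comparison described above, assuming both result sets fit in memory; pyodbc, the DSNs, and the table name are placeholders:

        import pyodbc

        def fetch_rows(conn_str, query):
            with pyodbc.connect(conn_str) as conn:
                cursor = conn.cursor()
                cursor.execute(query)
                return set(tuple(row) for row in cursor.fetchall())

        old_rows = fetch_rows("DSN=legacy_db", "SELECT * FROM precalc_data")
        new_rows = fetch_rows("DSN=ported_db", "SELECT * FROM precalc_data")

        # Rows that appear in one result set but not the other.
        print("only produced by the old program:", old_rows - new_rows)
        print("only produced by the new program:", new_rows - old_rows)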

    Read the article

  • How to retrieve MySQL records as an INSERT statement.

    - by Aglystas
    I'm trying to come up with the best method of synchronizing particular rows of two different database tables. So, for example, there are two product tables in different databases, as such...

        Origin database:
        product { merchant_id, product_id, ... additional fields }

        Destination database:
        product { merchant_id, product_id, ... additional fields }

    So the database schema is the same for both. However, I'm looking to select records with a particular merchant_id, remove all records from the destination table that have that merchant_id, and replace those records with records from the origin database with the same merchant_id. My first thought was using mysqldump, parsing out the CREATE TABLE statements, and only running the INSERT statements. Seems like a pain, though. So I was wondering if there is a better technique to do this. I would think MySQL has some method of creating INSERT statements as output from a SELECT statement, so you can define how to insert specific record information into a new db. Any help would be appreciated, thank you much.
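
    For reference, mysqldump's --no-create-info and --where options can produce just the INSERT statements for one merchant. Below is an alternative sketch that copies the rows directly instead of generating SQL text; pymysql, the connection details, and any column names beyond merchant_id/product_id are assumptions:

        import pymysql

        MERCHANT_ID = 42  # hypothetical merchant being synchronized

        origin = pymysql.connect(host="origin-host", user="user", password="pw", database="origin_db")
        dest = pymysql.connect(host="dest-host", user="user", password="pw", database="dest_db")

        with origin.cursor() as src, dest.cursor() as dst:
            # Drop the merchant's rows at the destination, then re-copy them.
            dst.execute("DELETE FROM product WHERE merchant_id = %s", (MERCHANT_ID,))

            src.execute(
                "SELECT merchant_id, product_id, name, price FROM product WHERE merchant_id = %s",
                (MERCHANT_ID,),
            )
            dst.executemany(
                "INSERT INTO product (merchant_id, product_id, name, price) VALUES (%s, %s, %s, %s)",
                src.fetchall(),
            )

        dest.commit()
        origin.close()
        dest.close()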

    Read the article

  • SQL Insert multilingual characters

    - by Usman Akram
    I am trying to create a table in my MS SQL database for languages. I want to store an English name of the language and a local name of the language in the database, i.e.:

        Language, Language (local)
        English, English
        German, Deutsch
        Italian, Italiano
        Japanese, ???
        ...
        ...

    I have 279 languages that I want to import, but when I import, it shows '?????' for some, like Japanese, Russian and Arabic, etc. The database collation is Latin1_General_CI_AS. I would also like advice on multilingual websites: if I have a database of product descriptions and I want to have translations in multiple languages, should I go for separate databases, or can I have the translations in one database? (I prefer not to duplicate data!) Anything else to make sure users are able to write comments in different languages (character encoding on the web?) that can be stored in the database.

    Read the article

  • How to connect 2 MySQL tables with 2 connection strings

    - by denonth
    Hi all, I need to connect two tables from two MySQL databases that have two different connection strings, and each is on a different server. I have this query:

        cmd = new MySqlCommand(String.Format("INSERT INTO {0} (a,b,c,d) SELECT (a,b,c,d) FROM {1}",
            ConfigSettings.ReadSetting("main_table"),
            ConfigSettings.ReadSetting("main_table")), con);

    Both tables have the same columns; that's why I have only one ConfigSettings.ReadSetting("main_table") for both of them, as they are the same. I have two connection strings, and each is pointing to its server:

        con.ConnectionString = ConfigurationManager.ConnectionStrings["con1"].ConnectionString;
        con2.ConnectionString = ConfigurationManager.ConnectionStrings["con2"].ConnectionString;

    How can I make this cmd work with two different connection strings and with the same name for the table? The table name will change; that's why it is saved in config.

    Read the article

  • StructureMap - Injecting a dependency into a base class?

    - by David
    In my domain I have a handful of "processor" classes which hold the bulk of the business logic. Using StructureMap with default conventions, I inject repositories into those classes for their various IO (databases, file system, etc.). For example:

        public interface IHelloWorldProcessor
        {
            string HelloWorld();
        }

        public class HelloWorldProcessor : IHelloWorldProcessor
        {
            private IDBRepository _dbRepository;

            public HelloWorldProcessor(IDBRepository dbRepository)
            {
                _dbRepository = dbRepository;
            }

            public string HelloWorld()
            {
                return _dbRepository.GetHelloWorld();
            }
        }

    Now, there are some repositories that I'd like to be available to all processors, so I made a base class like this:

        public class BaseProcessor
        {
            protected ICommonRepository _commonRepository;

            public BaseProcessor(ICommonRepository commonRepository)
            {
                _commonRepository = commonRepository;
            }
        }

    But when my other processors inherit from it, I get a compiler error on each one saying that there's no constructor for BaseProcessor which takes zero arguments. Is there a way to do what I'm trying to do here? That is, to have common dependencies injected into a base class that my other classes can use without having to write the injections into each one?

    Read the article

  • Bulletproof way to DROP and CREATE a database under Continuous Integration.

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup, but I'm finding it difficult to automate the dropping and creation of the database, which is to be expected given the complexities of the db being in use. Sometimes the process hangs, errors out with "db is currently in use", or just takes too long. I don't care if the db is in use; I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the db instead of dropping the db itself?

        USE master
        -- Create a database
        IF EXISTS (SELECT name FROM sys.databases WHERE name = 'mydb')
        BEGIN
            ALTER DATABASE mydb SET SINGLE_USER -- or RESTRICTED_USER
            -- WITH ROLLBACK IMMEDIATE
            DROP DATABASE mydb
        END
        CREATE DATABASE mydb;

    Read the article

  • How can I use Amazon's API in PHP to search for books?

    - by TerranRich
    I'm working on a Facebook app for book sharing, reviewing, and recommendations. I've scoured the web, searched Google using every search phrase I could think of, but I could not find any tutorials on how to access the Amazon.com API for book information. I signed up for an AWS account, but even the tutorials on their website didn't help me one bit. They're all geared toward using cloud computing for file storage and processing, but that's not what I want. I just want to access their API to search info on books. Kind of like how http://openlibrary.org/ does it, where it's a simple URL call to get information on a book (but their databases aren't nearly as populated as Amazon's). Why is it so damned hard to find the information I need on Amazon's AWS site? If anybody could help, I would greatly appreciate it.
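
    For what it's worth, the book-search piece of AWS at the time was the Product Advertising API: an ItemSearch call against the Books index, sent as a signed HTTP GET, which is unrelated to the cloud-storage tutorials. The asker works in PHP, where the same request can be built; the following is only a rough Python sketch of the classic /onca/xml signing scheme (pre-2011 style), with the access key, secret, and associate tag as placeholders, and no guarantee it matches the current version of the API:

        import base64
        import hashlib
        import hmac
        import time
        import urllib.parse
        import urllib.request

        ACCESS_KEY = "YOUR-ACCESS-KEY"     # placeholder
        SECRET_KEY = b"YOUR-SECRET-KEY"    # placeholder
        ASSOCIATE_TAG = "yourtag-20"       # placeholder

        HOST, PATH = "webservices.amazon.com", "/onca/xml"

        params = {
            "Service": "AWSECommerceService",
            "Operation": "ItemSearch",
            "SearchIndex": "Books",
            "Keywords": "design patterns",
            "ResponseGroup": "ItemAttributes",
            "AWSAccessKeyId": ACCESS_KEY,
            "AssociateTag": ASSOCIATE_TAG,
            "Timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }

        # Canonical query string: keys sorted by byte order, RFC 3986 encoding.
        query = "&".join(
            f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
            for k, v in sorted(params.items())
        )

        string_to_sign = f"GET\n{HOST}\n{PATH}\n{query}"
        signature = base64.b64encode(
            hmac.new(SECRET_KEY, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
        )

        url = f"https://{HOST}{PATH}?{query}&Signature={urllib.parse.quote(signature)}"
        print(urllib.request.urlopen(url).read()[:500])  # raw XML response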

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that:

        - Querying on a primary key or unique index will give you an O(1) lookup time.
        - Querying on a non-unique index will also give an O(1) time, albeit maybe the '1' is slower than for the unique index (?).
        - Querying on a column without an index will give an O(N) lookup time (full table scan).

    Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is for SQLite, but I'd be interested in knowing to what extent this varies between different databases too.
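
    One note worth adding (not from the question itself): SQLite and most other engines implement indexes as B-trees rather than hash tables, so the conventional cost model is logarithmic rather than constant. Roughly, for a B-tree of fan-out $b$ over $N$ rows with $k$ rows matching the key:

        $$ T_{\text{unique lookup}} = O(\log_b N), \qquad T_{\text{non-unique lookup}} = O(\log_b N + k), \qquad T_{\text{full scan}} = O(N) $$

    In practice $\log_b N$ is only a handful of page reads even for millions of rows, so treating it as O(1) is a common approximation, which is presumably where the intuition above comes from.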

    Read the article

  • How to deal with a 'bad' decision forced on you regarding basic software for your product

    - by raticulin
    Here is my situation: our product used to support several of the major databases. Now management has decided to move all products to MaxDB (a.k.a. SapDB previously), and even if we keep supporting some of the previous dbs, all new installations are on MaxDB. I am sure MaxDB is a great db and can support huge SAP installations. But from the point of view of a software developer, it's a nightmare. Every time you need to do something non-trivial (write a stored procedure, some fancy trigger...) and you google for some info, you get like 0.1% of the hits you would with things like MySql, PostgreSql or MSSql. Mailing lists are nearly nonexistent. SAP does support it commercially, but it is not clear whether we'll buy support. And the decision cannot be rolled back. The product works with MaxDB, but with lots of inefficiency in development and a lot of frustration. Is there something one could do?

    Read the article

  • The conceptual process of populating related tables in a database (MySql) from a CSV file

    - by user322772
    I'm new to relational databases, and all the material I've read covered primary and foreign keys, normal forms, and joins, but left out how to populate the database once it's created. How do you import a CSV file so the fields match their related table? Say you were trying to build a beer database and had a CSV file with each line as a record:

        Header:   brewer, beer_name, country, city, state, beer_category, beer_type, alcohol_content
        Record 1: Anheuser-Busch, Budweiser, United States, St. Louis, MO, Pale lager, Regular, 5.0%
        Record 2: Anheuser-Busch, Bud Light, United States, St. Louis, MO, Pale lager, Light, 4.2%
        Record 3: Miller Brewing Company, Miller Lite, United States, Milwaukee, WI, Pale lager, Light, 4.2%

    You can create a "Brewer" table and a "Beer" table. When importing, how do you connect the primary keys between the tables?
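
    A minimal sketch of the usual two-step pattern, shown with Python and SQLite purely for illustration (table and column names are assumptions): insert or look up the parent row (brewer) first, then use its generated primary key as the foreign key on the child row (beer).

        import csv
        import io
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE brewer (
                brewer_id INTEGER PRIMARY KEY,
                name TEXT UNIQUE, country TEXT, city TEXT, state TEXT);
            CREATE TABLE beer (
                beer_id INTEGER PRIMARY KEY,
                brewer_id INTEGER REFERENCES brewer(brewer_id),
                name TEXT, category TEXT, type TEXT, alcohol_content TEXT);
        """)

        csv_data = io.StringIO(
            "brewer,beer_name,country,city,state,beer_category,beer_type,alcohol_content\n"
            "Anheuser-Busch,Budweiser,United States,St. Louis,MO,Pale lager,Regular,5.0%\n"
            "Anheuser-Busch,Bud Light,United States,St. Louis,MO,Pale lager,Light,4.2%\n"
            "Miller Brewing Company,Miller Lite,United States,Milwaukee,WI,Pale lager,Light,4.2%\n"
        )

        for row in csv.DictReader(csv_data):
            # Insert the brewer once; duplicates are skipped via the UNIQUE name.
            conn.execute(
                "INSERT OR IGNORE INTO brewer (name, country, city, state) VALUES (?, ?, ?, ?)",
                (row["brewer"], row["country"], row["city"], row["state"]),
            )
            brewer_id = conn.execute(
                "SELECT brewer_id FROM brewer WHERE name = ?", (row["brewer"],)
            ).fetchone()[0]

            # The looked-up parent key becomes the beer row's foreign key.
            conn.execute(
                "INSERT INTO beer (brewer_id, name, category, type, alcohol_content) "
                "VALUES (?, ?, ?, ?, ?)",
                (brewer_id, row["beer_name"], row["beer_category"], row["beer_type"],
                 row["alcohol_content"]),
            )

        conn.commit()
        print(conn.execute(
            "SELECT beer.name, brewer.name FROM beer JOIN brewer USING (brewer_id)"
        ).fetchall())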

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly to a remote location over SFTP. This is useful when dumping large data sets and you don't have enough local disk space to create the backup file and then copy it to a remote location. I've tried using Python + paramiko, which provides this functionality, but the performance is much worse than using the native openssh/sftp binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on Linux or with some library like paramiko (but one that performs close to the native sftp client)?
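
    A minimal sketch of the streaming approach, assuming a MySQL-style dump command and piping through the native OpenSSH client rather than the sftp subsystem (which may or may not be acceptable here); the credentials, host, and remote path are placeholders:

        import subprocess

        # Stream the dump straight to the remote host; nothing touches local disk.
        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction", "-u", "backup_user", "-psecret", "mydb"],
            stdout=subprocess.PIPE,
        )
        ssh = subprocess.Popen(
            ["ssh", "backup@remote-host", "cat > /backups/mydb.sql"],
            stdin=dump.stdout,
        )
        dump.stdout.close()  # so mysqldump sees a broken pipe if ssh exits early
        ssh.wait()
        dump.wait()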

    Read the article

  • HTML5 Web Database Security

    - by Daniel Dimovski
    Should the HTML5 database be used to store any form of private information? Say we have the following scenario: you're browsing a web-mail client that uses the web database to store mail drafts. After you've written some information, you close the web browser. What's to stop me from getting access to this information? If the webpage tries to clean out old information when opened, a user script could easily prevent the website from fully loading and then search through the database. Furthermore, the names of databases and tables are easily available through the web-mail client's source. W3C Draft

    Read the article

  • Web programming: Apache modules: mod_python vs mod_php

    - by Olivier Pons
    Hi! I've been using PHP with Apache (a.k.a. mod_php) for my web development work for more than 12 years. I've recently discovered Python and its real power (I still don't understand why this is not always the best product that becomes the most famous). I've just discovered mod_python for Apache. I've already googled, but without success, things like "mod_python vs mod_php". I wanted to know the differences between mod_php and mod_python in terms of:

        - speed
        - productivity
        - maintenance (I know Python is the most productive and maintainable language in the world, but is it the same for web programming with Apache?)
        - availability of features, e.g. cookies and session handling, databases, protocols, etc.
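
    For concreteness, the most visible difference is the programming model: mod_php maps a URL to a PHP script, while mod_python maps requests to Python callables via handlers (or its publisher mode). A minimal handler, offered only as a sketch (module name and configuration are assumptions); note that mod_python has since been superseded for most new work by WSGI-based setups such as mod_wsgi:

        # hello.py -- a minimal mod_python request handler
        from mod_python import apache

        def handler(req):
            # mod_python hands the Apache request object directly to Python code.
            req.content_type = "text/plain"
            req.write("Hello from mod_python")
            return apache.OK

        # Assumed Apache configuration for the directory serving this module:
        #   AddHandler mod_python .py
        #   PythonHandler hello
        #   PythonDebug On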

    Read the article

  • Adding a MongoDB collection to Netbeans

    - by Saif Bechan
    In NetBeans I have the option to add my MySQL databases to NetBeans. This way I can easily browse them and do small queries. Now I am working on a MongoDB project, and I want to know if it is possible to use the same functionality. I see that on the MongoDB website there is a list of drivers, and I see that you can add drivers in NetBeans. I do not know if this is the same thing, or if it can be used. I have tried Google, but no luck. Anyone have an idea?

    Read the article

  • Get the newest file from directory structure year/month/date/time

    - by Radek
    I store backups of databases in a directory structure year/month/day/time/backup_name. An example would be:

        basics_mini/2012/11/05/012232/RATIONAL.0.db2inst1.NODE0000.20110505004037.001
        basics_mini/2012/11/06/012251/RATIONAL.0.db2inst1.NODE0000.20110505003930.001

    Note that the timestamp from the backup file cannot be used: before the automated testing starts, the server time is set to 5.5.2011. So the question is how I can get the latest file if I pass the "base directory" (basics_mini) to some function that I am going to code. My thought is to list the base directory and sort by time to get the year, then do the same for month, day and time. I wonder if there is any "easier" solution to that in PHP.
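
    The asker is working in PHP, but the idea is easy to show as a sketch in Python: because each path component (year/month/day/time) is zero-padded as in the example above, the lexicographically greatest matching directory is also the newest, so one glob plus max() is enough.

        import glob
        import os

        def newest_backup(base_dir):
            # year/month/day/time directories are zero-padded, so plain string
            # ordering of the directory part matches chronological ordering.
            paths = glob.glob(os.path.join(base_dir, "*", "*", "*", "*", "*"))
            if not paths:
                return None
            return max(paths, key=os.path.dirname)

        print(newest_backup("basics_mini"))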

    Read the article

  • Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction and factory methods. For background, my site is a common-interest social networking community currently in beta mode. Currently, I've started moving my old code for object retrieval to factory methods. However, I do feel like I'm limiting myself by keeping a lot of SQL table names and structure spelled out separately in each function/method. Questions:

        - Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
        - Can PEAR interface with the MySQLi prepared statements I currently use?
        - Will it help me separate table names from each method? (If not, what other design patterns might I want to research?)
        - Will it slow down my site once I have a significantly large member base?

    Read the article

  • Killing the mysqld process

    - by Josh K
    I have a table with ~800k rows. I ran an update:

        UPDATE users SET hash = SHA1(CONCAT({about eight fields})) WHERE 1;

    Now I have a hung Sequel Pro process and I'm not sure about the mysqld process. This is really two questions:

        1. What harm can possibly come from killing these programs? I'm working on a separate database, so no damage should come to other databases on the system, right?
        2. Assume you had to update a table like this. What would be a quicker / more reliable method of updating without writing a separate script?

    I just checked with phpMyAdmin and it appears as though the query is complete. I still have Sequel Pro using 100% of both my cores, though...

    Read the article

  • Accessing variable from ARGV

    - by snaken
    I'm writing a cPanel postwwwact script. If you're not familiar with it, the script is run after a new account is created. It relies on the user account variable being passed to the script, which I then use for various things (creating databases, etc.). However, I can't seem to find the right way to access the variable I want. I'm not that good with shell scripts, so I'd appreciate some advice. I had read somewhere that the value I wanted would be included in $ARGV{'user'}, but this simply gives "root" as opposed to the value I need. I've tried looping through all the arguments (list of arguments here) like this:

        #!/bin/sh
        for var
        do
            touch /root/testvars/$var
        done

    and the value I want is in there; I'm just not sure how to accurately target it. There's info here on doing this with PHP or Perl, but I have to do this as a shell script. EDIT: Ideally I would like to be able to call the variable by something other than $1 or $2 etc., as this would create issues if an argument is added or removed. Any ideas?

    Read the article

  • Loading .sql files from within PHP

    - by Josh Smeaton
    I'm creating an installation script for an application that I'm developing and need to create databases dynamically from within PHP. I've got it to create the database, but now I need to load in several .sql files. I had planned to open each file and mysql_query it a line at a time, until I looked at the schema files and realised they aren't just one query per line. So, please: how do I load an SQL file from within PHP (as phpMyAdmin does with its import command)?
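
    One common approach is to hand the file to the mysql command-line client and let it handle statement splitting, comments, and DELIMITER blocks (in PHP that would be done through shell_exec or proc_open). A minimal sketch of the idea, written in Python here; the client invocation, credentials, and file name are placeholders:

        import subprocess

        def load_sql_file(sql_path, db_name, user, password):
            # The mysql client parses the .sql file itself, so multi-line
            # statements and stored-routine definitions are handled correctly.
            with open(sql_path, "rb") as sql_file:
                subprocess.run(
                    ["mysql", "-u", user, f"-p{password}", db_name],
                    stdin=sql_file,
                    check=True,
                )

        load_sql_file("schema.sql", "myapp_db", "root", "secret")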

    Read the article

  • My database has suddenly been deleted on the server; how do I recover it?

    - by user2728312
    I'm running an application on Windows Server that connects to a SQL Server database. Today, when I opened SQL Server Management Studio, I was surprised that the database was not in the list of databases! I don't know the reason. I searched the server's files and the Recycle Bin, but I can't find the database. My database was at C:\db\myWeb.mdf, and suddenly it has been removed! Can anyone tell me how to recover the database?

    Read the article

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database into source control. One of the problems we have is a big table with static data (containing translated language strings); it has close to 200K rows. I know that our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for filling up this table whenever we need to?

    Read the article
