Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.


  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database whose data was imported from another server using the Import and Export Wizard of SQL Server Management Studio. There is also an empty database with the same tables, but that one also has primary keys, foreign keys and indexes. How do I get the data from the first database into the second? Here is the description of my crusade, and believe me, it is not a nice one.

    Bugs in the Import and Export Wizard. There are some awful bugs in the Import and Export Wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create the tables, whatever you say in the settings. The result is a faulty and useless package. So let's go step by step and make things work in our scenario.

    Databases. There are two databases. Let's name them like this: PLAIN – contains the data imported from the remote server (no indexes, no keys, nothing, just plain dumb data); CORRECT – an empty database with the same structure as the remote database (indexes, keys and everything else, but no data). Our goal is to get the data from PLAIN into CORRECT.

    1. Create the import and export package. At this point we create the faulty SSIS package using SQL Server Management Studio. Run the Import and Export Wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2. Make sure you enable identity insert, make sure no views are selected, and make sure you don't let the package create tables (you can skip this setting because it wants to create tables anyway). Save the package to SSIS.

    2. Modify the import and export package. Now let's clean up the package and remove all the faulty parts. Connect SQL Server Management Studio to the SSIS instance, select the package you just saved and export it to your hard disk. Run Business Intelligence Studio and create a new SSIS project (DON'T MISS THIS STEP). Add the package from disk as an existing item to the project and open it. Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of a table is checked before the table is created (yes, you have to do it manually). Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO
        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO

    Save the task. Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO

    Save the task. Now connect the first Execute SQL task to the first Data Flow task, and the last Data Flow task to the second Execute SQL task. Then move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: take the package from the solution folder, because that is where it is saved now; don't overwrite the existing package, but use a numeric suffix and let Management Studio create a new version of the package. Now you are done with your package. Run it to test it and clean out all the errors you find.

    TRUNCATE vs DELETE. You can see that I used DELETE FROM instead of TRUNCATE. Why? Because TRUNCATE has some nasty limits (taken from MSDN): "You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view." As I am not sure what tables you have and how they are used, the solution given here should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

    Conclusion. My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and we still have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with the mysterious bloatware that is our only choice when using the default tools. If you look at the length of this posting and the number of steps I had to perform for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools became more intelligent one day.

    References: T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli); TRUNCATE TABLE (MSDN Library); Error Handling in SQL 2000 – a Background (Erland Sommarskog); Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher).
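
    For reference, the same disable / clear / copy / re-enable sequence that the package runs can also be executed by hand against the destination database. A minimal sketch follows; dbo.SomeTable and its columns are placeholder names, and the single INSERT stands in for the per-table copies that the SSIS Data Flow tasks perform:

        -- run against the destination database
        USE CORRECT;
        GO
        -- disable all foreign key and check constraints, then empty every table
        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
        EXEC sp_MSForEachTable 'DELETE FROM ?';
        GO
        -- copy one table; repeat per table, using IDENTITY_INSERT where the
        -- table has an identity column whose values must be preserved
        SET IDENTITY_INSERT dbo.SomeTable ON;
        INSERT INTO dbo.SomeTable (Id, Name)
        SELECT Id, Name FROM PLAIN.dbo.SomeTable;
        SET IDENTITY_INSERT dbo.SomeTable OFF;
        GO
        -- re-enable and re-validate all constraints
        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL';
        GO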

    Read the article

  • LinqDataSource wizard table list is not refreshing after updating LinqToSql classes

    - by dotnet_learner
    I have changed my dbml file like this: I deleted all the tables and stored procs, then added new tables and stored procs from a new database. In the code-behind, I can access the new tables and stored procs. However, when I try to configure a LinqDataSource that uses the same DataContext, I still see all the old tables in the wizard drop-down. How do I refresh the wizard drop-down so that I can select the newly added tables? Deleting the old LinqDataSource and adding a new one is not working.

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs:

    Any record generated must be able to be connected to any other record in any other user table (excluding itself... the record, not the table). These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from its connections (like an average, sum, etc.) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (I can't load the entire database into memory at startup and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote DB. Neither the user tables, connections nor records are known at design time, as they are user-generated.

    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all of a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became onerous (i.e. a huge spaghettified mess). Tracing dependencies using this method became almost impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cyclic recursive updates, etc.

    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:

    Option 1: Store all the connections in one big table. If I do this, either I load all connections at once (one big DB call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down.

    Option 2: Store all the outgoing connections for each user table in a separate table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.

    Option 3: Store all outgoing AND incoming connections for each user table in separate tables (using a flag to distinguish incoming vs outgoing). This is the idea I'm leaning towards, but it essentially doubles the total DB storage for all the connections (as each connection is stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table to have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all its connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once, so I'll have to devise a common caching/factory object to make sure only one connection object is made per connection.

    Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
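
    As a reference point for option 1, a single connections table with an index covering each direction can usually serve both the outgoing and the incoming lookups without duplicating rows. A minimal sketch in generic SQL, using the fields listed above (names adapted, nothing here is prescribed by the original post):

        -- One shared connections table (option 1).
        CREATE TABLE connection (
            from_table_id   INTEGER NOT NULL,
            from_record_id  INTEGER NOT NULL,
            from_order      INTEGER NOT NULL,
            to_table_id     INTEGER NOT NULL,
            to_record_id    INTEGER NOT NULL,
            to_order        INTEGER NOT NULL,
            PRIMARY KEY (from_table_id, from_record_id, to_table_id, to_record_id)
        );
        -- The primary key already answers "all outgoing connections of a record";
        -- this index answers the reverse, "all incoming connections of a record".
        CREATE INDEX ix_connection_to ON connection (to_table_id, to_record_id);

    Loading a user table then needs one query per direction against the same table, rather than a second mirrored copy that has to be kept in sync.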

    Read the article

  • MTP Won't Work With Newer Ubuntu

    - by spacesword
    I have a Philips GoGear Vibe 4 GB, set to MTP mode, as it has always been. This works fine with older versions of Ubuntu and with Windows, but doesn't work with newer versions of Ubuntu. The version history is like this:

    12.04 - works
    12.10 - works
    13.04 - doesn't work
    14.04 - doesn't work
    Windows 7 - works

    When you plug the MP3 player into Ubuntu, the file manager opens the root of the device, which contains the folder "internal storage". When you click on "internal storage" to open it, the file manager just hangs, and if you try to unmount the MP3 player, that process hangs too, until you just unplug it. In Windows, when you click on internal storage, it opens and shows all the files it contains. And in the earlier versions of Ubuntu it just worked. Any ideas? Thanks.

    Read the article

  • Combine 2 apps into one DB?

    - by coffeeaddict
    I'm debating whether to use the same DB for both my blog and my wiki. Since both are open source, and both install only the small number of tables they require, I'm thinking about just using one database to hold both sets of tables. Is this common and safe to do? I am hesitant because I always create a new DB for every application I create or use. But in this case, I don't want to spend another $10 a month from my shared hosting just to get another SQL 2008 DB to host a wiki... it's small and I'm the only one using it. I just want to point the wiki at my existing blog DB that's already running, have the wiki wizard auto-generate its tables in that DB, and keep both sets of tables there.
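
    Sharing one database is generally workable as long as the two applications' table names don't collide. A quick sketch of how you might check before pointing the wiki installer at the blog database (SQL Server syntax; the schema name is just an illustration, not something either app requires):

        -- list the tables the blog already owns, to spot any name the wiki
        -- installer would also try to create (prefixes or schemas avoid clashes)
        SELECT s.name AS schema_name, t.name AS table_name
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        ORDER BY s.name, t.name;

        -- optionally keep the wiki's tables in their own schema inside the same DB
        CREATE SCHEMA wiki AUTHORIZATION dbo;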

    Read the article

  • Scalable distributed file system for blobs like images and other documents

    - by Pinnacle
    Cassandra and HBase both do not efficiently support storage of blobs like images. Storing directly on HDFS stresses the NameNode because of the huge number of files. Facebook uses Haystack for image and attachment storage, but this is not open source. So is Lustre a good choice for distributed blob storage? I have read that Amazon S3 is used by many, but it would cost money and, personally, I would not like to rely on a third-party system. What other suggestions are there?

    Read the article

  • How to convert an HTML table to an array in python

    - by user345660
    I have an HTML document, and I want to pull the tables out of this document and return them as arrays. I'm picturing two functions: one that finds all the HTML tables in a document, and a second one that turns an HTML table into a 2-dimensional array. Something like this:

        htmltables = get_tables(htmldocument)
        for table in htmltables:
            array = make_array(table)

    There are two catches: 1. The number of tables varies from day to day. 2. The tables have all kinds of weird extra formatting, like bold and blink tags, randomly thrown in. Thanks!

    Read the article

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for the primary keys on our tables. There are some tables where we don't really use the primary key for any querying purpose, but we have indexes on other columns. These are non-unique indexes, and the queries use these non-primary-key columns in their WHERE conditions. So I don't really see any benefit of having a primary key on such tables. My experience with SQL Server 2000 was that it would only replicate tables which had a primary key; otherwise it would not. I am using Oracle 10gR2. I would like to know if there are any such side effects of having tables that don't have a primary key.
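
    If the question is ultimately whether those primary keys could simply be dropped, the main things lost are the guaranteed uniqueness of the sequence-generated column and the index that enforces it. A hedged sketch of what would then typically be put back by hand (Oracle syntax; the table and column names are made up):

        -- dropping the primary key also drops its backing index and the implicit
        -- NOT NULL, so recreate whichever guarantees the application still needs
        ALTER TABLE some_table DROP PRIMARY KEY;
        CREATE UNIQUE INDEX some_table_id_ux ON some_table (id);
        ALTER TABLE some_table MODIFY (id NOT NULL);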

    Read the article

  • In SQL, we can use "Union" to merge two tables. What are different ways to do "Intersection"?

    - by Jian Lin
    In SQL, there is an operator to "Union" two tables. In an interview, I was told that, say, one table has just 1 field with 1, 2, 7, 8 in it, and another table also has just 1 field with 2 and 7 in it; how do I get the intersection? I was stunned at first, because I never saw it that way. Later on, I found that it is actually a "Join" (inner join), which is just

        select * from t1, t2 where t1.number = t2.number

    (although the name "join" feels more like "union" rather than "intersect"). Another solution seems to be

        select * from t1 INTERSECT select * from t2

    but it is not supported in MySQL. Are there different ways to get the intersection besides these two methods?
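
    For the record, besides the join and INTERSECT shown above, the same result can be obtained with a subquery. A sketch in generic SQL, assuming the single column is called number as in the join example:

        -- semi-join with IN: values of t1 that also appear in t2
        SELECT number FROM t1
        WHERE number IN (SELECT number FROM t2);

        -- the same written as a correlated EXISTS
        SELECT number FROM t1
        WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.number = t1.number);

    Unlike INTERSECT, these keep any duplicate values present in t1, so add DISTINCT if strict set semantics are needed.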

    Read the article

  • Custom DB logging using Enterprise Library 4.1

    - by Rohit
    We have to create a historical log of all the changed entities, and we have defined our own custom tables for this purpose. I have to incorporate these tables into the Enterprise Library logging block and do the logging in these tables, and I need to write an SP to insert values into them. What I have found from Google so far is that I have to create a listener inheriting from CustomTraceListener and provide my implementation of WriteMessage. What I need to know is: how will I plug my tables and SP into the Enterprise Library logging block?
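
    For the database side of such a listener, a minimal sketch of a custom log table and the stored procedure that a WriteMessage implementation could call is shown below. All names here are hypothetical and not part of Enterprise Library:

        -- hypothetical history table and insert procedure; the custom trace
        -- listener would execute the procedure once per logged entry
        CREATE TABLE dbo.EntityChangeLog (
            LogId       INT IDENTITY(1,1) PRIMARY KEY,
            EntityName  NVARCHAR(128) NOT NULL,
            ChangedBy   NVARCHAR(128) NOT NULL,
            ChangeType  NVARCHAR(16)  NOT NULL,   -- e.g. INSERT / UPDATE / DELETE
            Details     NVARCHAR(MAX) NULL,
            ChangedAt   DATETIME      NOT NULL DEFAULT GETDATE()
        );
        GO
        CREATE PROCEDURE dbo.usp_WriteEntityChange
            @EntityName NVARCHAR(128),
            @ChangedBy  NVARCHAR(128),
            @ChangeType NVARCHAR(16),
            @Details    NVARCHAR(MAX)
        AS
        BEGIN
            INSERT INTO dbo.EntityChangeLog (EntityName, ChangedBy, ChangeType, Details)
            VALUES (@EntityName, @ChangedBy, @ChangeType, @Details);
        END;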

    Read the article

  • Best performance approach to history mechanism?

    - by Royi Namir
    We are going to create a history mechanism for our changes in the DB (DART in the picture) via triggers. We have 600 tables. For each record that is changed, the trigger will insert the deleted version into XXX. Regarding the XXX:

    Option 1: clone each table in the "Dart" DB, so each table now has a "sister table", e.g. Table1 will have Table1_History. Problems: we will have 1200 tables, and a programmer can make mistakes by working on the wrong tables.
    Option 2: make a new DB (DART_2005 in the picture) and keep the history tables there.
    Option 3: use a linked server which stores the DB that will contain the history tables.

    Questions: 1) Which option gives the best performance (I guess 3 does not, but is it 1 or 2, or the same)? 2) Does option 2 act like a "linked server" (in queries we will need to select from both DBs)? 3) What is the best-practice approach?
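
    For reference, whichever of the three homes is chosen for the history table, the trigger involved looks roughly like this. A sketch for SQL Server, where Table1, Table1_History and their columns are placeholders:

        -- hypothetical audit trigger: the previous version of each updated or
        -- deleted row is copied from the "deleted" pseudo-table into history
        CREATE TRIGGER trg_Table1_History
        ON dbo.Table1
        AFTER UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.Table1_History (Id, Col1, Col2, ChangedAt)
            SELECT Id, Col1, Col2, GETDATE()
            FROM deleted;
        END;

    With options 2 or 3, only the target name changes (a cross-database or four-part linked-server name instead of dbo.Table1_History).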

    Read the article

  • MySQL not starting - InnoDB not found

    - by Rob Guderian
    I have a fresh install of Ubuntu 12.04 server edition and the MySQL server is not starting properly. I did a simple install:

        apt-get install mysql-server

    But it's failing with this error message:

        root@test:~# mysqld
        120618 20:57:32 [Warning] The syntax '--log-slow-queries' is deprecated and will be removed in a future release. Please use '--slow-query-log'/'--slow-query-log-file' instead.
        120618 20:57:32 [Note] Plugin 'FEDERATED' is disabled.
        120618 20:57:32 InnoDB: The InnoDB memory heap is disabled
        120618 20:57:32 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        120618 20:57:32 InnoDB: Compressed tables use zlib 1.2.3.4
        120618 20:57:32 InnoDB: Unrecognized value fdatasync for innodb_flush_method
        120618 20:57:32 [ERROR] Plugin 'InnoDB' init function returned error.
        120618 20:57:32 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120618 20:57:32 [ERROR] Unknown/unsupported storage engine: InnoDB
        120618 20:57:32 [ERROR] Aborting

    I can start the server with the "--skip-innodb --default-storage-engine=myisam" flags, but would like to use InnoDB. Does anyone know what the issue here is?

    Read the article

  • How to disable low disk space notification? (Gnome 3, Ubuntu 12.04)

    - by grimripper
    I've got an SSD drive for root and /home, and a larger HDD for storage. The storage disk is almost full, and there's a low disk space warning on startup. The warning doesn't have any "don't show again" option, only "ignore" and "examine". It also doesn't go away but sticks on the screen until the ignore button is clicked, so it's very annoying in addition to being completely unnecessary. I tried unselecting the storage HDD in baobab's settings, but that didn't have any effect. I also tried gconf-editor and looked for apps - gnome_settings_daemon - plugins - housekeeping, but there is no "plugins" under "gnome_settings_daemon", only "gtk-modules" and "keybindings". GNOME 3.4.2.1, Ubuntu 12.04.1 LTS 64-bit.

    Read the article

  • Writing OLAP SQL query

    - by user1859596
    I have a project I am working on that requires the following:

    1. Create a normalized sample RDBMS (5 tables). Using Java, I entered 1 million rows of data into each table.
    2. Run two OLTP and two OLAP queries on the normalized tables.
    3. Denormalize the tables.
    4. Run the same OLTP and OLAP queries on them and compare the times.

    What does an OLAP query mean? I've searched the internet and all that I can find is that I have to make a cube and apply queries to it. How can I write an OLAP query on an RDBMS? Here is my sample schema (normalized tables: orders, product, customer, branch, sales):

        sales    : order_id, product_id, quantity
        product  : product_id, name, description, price, sales_tax
        customer : customer_id, f_name, l_name, tel_no, addr, nic, city
        branch   : branch_id, name, tel_no, addr, city
        orders   : order_id, customer_id, order_date, branch_id

    I want to write an OLAP query on the above tables. I am using Oracle Express with SQL Developer.
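
    In this context an OLAP query is usually just an analytical aggregate scanning many rows, as opposed to an OLTP query that touches a handful of rows by key. A sketch against the schema above, in Oracle syntax, with the column meanings assumed from their names:

        -- OLTP-style: one customer's orders, fetched by key
        SELECT o.order_id, o.order_date
        FROM   orders o
        WHERE  o.customer_id = 42;

        -- OLAP-style: revenue per city and branch, with subtotals and a grand
        -- total produced by ROLLUP
        SELECT   b.city, b.name AS branch_name,
                 SUM(s.quantity * p.price) AS revenue
        FROM     sales   s
        JOIN     orders  o ON o.order_id   = s.order_id
        JOIN     product p ON p.product_id = s.product_id
        JOIN     branch  b ON b.branch_id  = o.branch_id
        GROUP BY ROLLUP (b.city, b.name);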

    Read the article

  • Finding the width of a directed acyclic graph... with only the ability to find parents

    - by Platinum Azure
    Hi guys, I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list. The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table. That is the ONLY ability I have at this point, without changing the code severely. The code has been in public use for a while, and the last thing we want to do is to change the structure significantly and have a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node. I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed. Thanks! EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.

    Read the article

  • SQLITE (C/C++ interface) - How to commit a transaction

    - by AJ
    I am using the SQLite C/C++ interface. Here is my scenario: I have 3 related tables, say A, B, C. There is a function called Set which gets some inputs and, based on the inputs, inserts rows into these three tables (sometimes it can be an update in one of the tables). Now I need two things. One, I don't want the autocommit feature; basically I would like to commit after every 1000 calls to the Set function. Secondly, within the Set function itself, if I find that after inserting into two tables the third insert fails, then I have to revert those particular changes made in that Set function call. However, I don't see any sqlite3_commit function exposed; I only see a function called sqlite3_commit_hook(), which according to the documentation does something slightly different. Are there any functions exposed for this purpose, or what is the way to achieve this behaviour? Can you help me with the best approach for doing this? Regards, Arjun
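
    SQLite has no separate commit call in the C API; transactions are controlled by executing ordinary SQL, for example through sqlite3_exec or a prepared statement. A sketch of the statements that map onto the two requirements (the savepoint name is arbitrary):

        -- switch off per-statement autocommit by opening an explicit transaction,
        -- then commit once after a batch of roughly 1000 Set calls
        BEGIN TRANSACTION;
        -- ... the INSERT/UPDATE statements issued by the Set calls ...
        COMMIT;

        -- inside one Set call: a savepoint lets just that call be undone
        SAVEPOINT set_call;
        -- INSERT into A; INSERT into B; INSERT into C ...
        -- if the third insert fails:
        ROLLBACK TO set_call;
        -- if all three succeed:
        RELEASE set_call;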

    Read the article

  • Dynamically add rows to listbox

    - by Ivan S
    I have a list box that displays information from a column of a dataset. I would like the number of rows displayed to be all the rows that are in the dataset (the number of rows in the dataset varies). I'm figuring it has something to do with ListBox.Rows = Dataset.Tables[0].Rows.Count; but it seems to just always default to 4, even when it is only 2. This is what I have in my aspx.cs file:

        pirateBox.DataTextField = pirateship.Tables[0].Columns["displayName"].ToString();
        pirateBox.DataValueField = pirateship.Tables[0].Columns["PKID"].ToString();
        pirateBox.DataSource = pirateship.Tables[0];
        pirateBox.DataBind();
        pirateBox.Rows = pirateship.Tables[0].Rows.Count;

    I've been trying a few things, and this is what I have so far in the .aspx:

        <asp:ListBox ID="pirateBox" runat="server" Rows="1"></asp:ListBox>

    Read the article

  • How to import datarow from one dataset to another?

    - by Disciple
    Hi, I want to copy a row to the new table only if its value in the second column isn't already present there (i.e. the new table should have rows with unique values in that column):

        DataTable tbl = new DataTable();
        DataTable tmpTable = ds.Tables[0];
        for (var rowIndex = 0; rowIndex < ds.Tables[0].Rows.Count; rowIndex++)
        {
            object value = null;
            foreach (DataRow x in tbl.Rows)
            {
                if (ds.Tables[0].Rows[rowIndex][1] == x[1])
                {
                    value = ds.Tables[0].Rows[rowIndex][1];
                    break;
                }
            }
            // value stays null when no matching row was found, so import the row
            if (value == null)
            {
                tbl.ImportRow(ds.Tables[0].Rows[rowIndex]);
            }
        }

    How do I do this correctly? Maybe with one loop instead of two?

    Read the article

  • Joomla 2.5 disable and remove Smart Cache

    - by WooDzu
    I am maintaining a Joomla 2.5 based magazine website with 3-4 new, long articles every day. Smart Search was enabled by default, and now I've got a few "finder" tables full of indexed phrases and terms. I wonder if there are any disadvantages if I were to: disable the Smart Search plugin, and remove these 'finder' tables completely. We are using a search field, which works fine, but I'm not sure what's going to happen if I disable the plugin and remove these tables. Will it then search for phrases in the Joomla content tables, or simply break without the missing 'finder' tables? Has anyone tried this before?

    Read the article

  • Announcing StorageTek VSM 6

    - by uwes
    On 23rd of October Oracle announced the 6th generation StorageTek Virtual Storage Manager system (StorageTek VSM 6). StorageTek VSM 6 gives customers simplicity, flexibility and mainframe-class reliability, all while reducing total cost of ownership:

    Simple – efficiently manages data and storage resources according to customer-defined rules, while streamlining overall tape operations.
    Flexible – engineered with flexibility in mind, it can be deployed to meet each enterprise's unique business requirements.
    Reliable – reduces a customer's exposure by providing superior data protection, an end-to-end high-availability architecture and closed-loop data integrity checking.
    Low Total Cost of Ownership and Investment Protection – low asset acquisition cost, a high-density data center footprint and physical tape energy efficiency keep customers' storage spending within budget.

    For more information go to: the Oracle.com Tape Page and the Oracle Technology Network Tape Page.

    Read the article

  • How to modify CSS when requirements change?

    - by DaveDev
    Suppose I'm given the requirement to generate a few pages that have tables on them. The original requirement is for all tables to be 500px, so I'd write my CSS as follows:

        table { width: 500px; }

    That will apply across the board to all tables. Now, what if they change the requirement so that some tables are 600px? What's the best way to modify the CSS? Should I give the tables classes, such as

        table.SizeOne { width: 500px; }
        table.SizeTwo { width: 600px; }

    or is there a better way for me to deal with changes like this?

    Read the article

  • Steps for MySQL DB Replication

    - by Manish Agrawal
    Following are the steps for a MySQL replication implementation on a Linux machine.

    Pre-implementation steps for DB replication:

    1. Identify the databases to be replicated.
    2. Identify the tables to be ignored during replication per database, for example log tables.
    3. Carefully identify and replace the variables and paths (locations) mentioned in the commands given below with appropriate values.
    4. Schedule the maintenance activity in odd hours, as these activities will affect all the databases on the Master database server.

    Implementation steps for DB replication:

    1. Configure the /etc/my.cnf file on the Master database server to enable binary logging, set the server id and configure the db names for which logging should be done:

        [mysqld]
        log-bin=mysql-bin
        server-id=1
        binlog-do-db = dbname

    Note: You can specify multiple DBs in binlog-do-db by using comma-separated dbname values like: dbname1, dbname2, ..., dbnameN

    2. On the Master database, grant replication slave privileges by executing the following command at the mysql prompt:

        mysql> GRANT REPLICATION SLAVE ON *.* TO slaveuser@<hostname> IDENTIFIED BY 'slavepassword';

    3. Stop the Master & Slave databases by giving the command:

        mysqladmin shutdown

    4. Start the Master database by giving the command:

        /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user &

    5. mysql> FLUSH TABLES WITH READ LOCK;

    Note: Leave the client (putty session) from which you issued the FLUSH TABLES statement running, so that the read lock remains in effect. If you exit the client, the lock is released.

    6. mysql> SHOW MASTER STATUS;

        +---------------+----------+--------------+------------------+
        | File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
        +---------------+----------+--------------+------------------+
        | mysql-bin.003 | 117      | dbname       |                  |
        +---------------+----------+--------------+------------------+

    Note: Record this information, as it will be required when starting the Slave and replication in later steps.

    7. Take a MySQL dump by running the following command in another session window (putty window):

        mysqldump -u user --ignore-table=dbname.tbl_name --ignore-table=dbname.tbl_name2 --master-data dbname > dbname_dump.db

    Note: When choosing databases to include in the dump, remember that you will need to filter out databases on each slave that you do not want to include in the replication process.

    8. Unlock the tables on the Master by giving the following command:

        mysql> UNLOCK TABLES;

    9. Copy the dump file to the Slave DB server.

    10. Start up the Slave using the --skip-slave option:

        /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --skip-slave &

    11. Restore the dump file on the Slave DB server:

        mysql -u user dbname < dbname_dump.db

    12. Stop the Slave database by giving the command:

        mysqladmin shutdown

    13. Configure the /etc/my.cnf file on the Slave database server:

        [mysqld]
        server-id=2
        replicate-ignore-table = dbname.tablename

    14. Start the Slave MySQL server with the 'replicate-do-db=dbname' option:

        /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --replicate-do-db=dbname --skip-slave

    15. Configure the settings at the Slave server for the Master host name, log filename and position within the log file, as recorded in Step 6 above, using the CHANGE MASTER statement in the MySQL session:

        mysql> CHANGE MASTER TO MASTER_HOST='<master_host_name>', MASTER_USER='<replication_user_name>', MASTER_PASSWORD='<replication_password>', MASTER_LOG_FILE='<recorded_log_file_name>', MASTER_LOG_POS=<recorded_log_position>;

    16. At the Slave server's mysql prompt, give the following commands:

        a. mysql> START SLAVE;
        b. mysql> SHOW SLAVE STATUS;

    Note: To stop the slave for a backup or any other activity, you can use the following command at the Slave server's mysql prompt:

        mysql> STOP SLAVE;

    Refer to the following links for more information on MySQL DB replication:
    http://dev.mysql.com/doc/refman/5.0/en/replication-options.html
    http://crazytoon.com/2008/04/21/mysql-replication-replicate-by-choice/
    http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data Connectors Downloads (here) include:
    - Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    - Oracle Loader for Hadoop Release 2.1.0
    - Oracle Data Integrator Companion 11g
    - Oracle R Connector for Hadoop v 2.1

    Oracle Big Data Documentation. The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data, Release 2.2.0 - E41604_01 (zip, 27.4 MB): Integrated Software and Big Data Connectors User's Guide (HTML, PDF).

    Oracle Data Integrator (ODI) Application Adapter for Hadoop. Apache Hadoop is designed to handle and process data that typically comes from non-relational data sources, in volumes beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs: Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:
    - Loading data into Hadoop from the local file system and HDFS
    - Performing validation and transformation of data within Hadoop
    - Loading processed data from Hadoop to an Oracle database for further processing and generating reports

    Oracle Database Loader for Hadoop. Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.

    Oracle R Connector for Hadoop. Oracle R Connector for Hadoop is a collection of R packages that provide:
    - Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    - Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files
    You install and load this package as you would any other R package. Using simple R functions, you can perform tasks such as:
    - Access and transform HDFS data using a Hive-enabled transparency layer
    - Use the R language for writing mappers and reducers
    - Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    - Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations

    Oracle SQL Connector for Hadoop Distributed File System. Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats: Data Pump files in HDFS, delimited text files in HDFS, and Hive tables. For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS.

    Related Documentation: Cloudera's Distribution Including Apache Hadoop Library; Oracle R Enterprise; Oracle NoSQL Database.

    Recent Blog Posts: Big Data Appliance vs. DIY Price Comparison; Big Data: Architecture Overview; Big Data: Achieve the Impossible in Real-Time; Big Data: Vertical Behavioral Analytics; Big Data: In-Memory MapReduce; Flume and Hive for Log Analytics; Building Workflows in Oozie.

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more or less 'off limits' to developers.

    The problem with temp tables is that, because they are scoped either to the procedure or the connection, it is easy to let them hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition; that connection will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive for the developer to use because they conform to scoping rules that are closer to those in procedural code. On the surface, then, they are an ideal way to deal with issues related to tempdb memory hogging.

    So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and multi-statement table-valued UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it has been implemented in SQL Server. Once again the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved. In the meantime, what is the right approach?
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.

    Read the article
