Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.


  • Geek City: Where are LOBs stored?

    - by Kalen Delaney
    When researching a question from one of the students in my class last week, I was reading the documentation for CREATE TABLE about storing LOB columns at http://msdn.microsoft.com/en-us/library/ms174979.aspx. For this discussion, LOB columns include text, image, ntext, xml and the MAX types when they are over 8000 bytes and stored outside the regular data row. I knew that SQL Server gives us the capability of storing LOB columns in a separate filegroup with the TEXTIMAGE_ON clause, but I was surprised...(read more)
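
    The TEXTIMAGE_ON clause mentioned above is easy to try out. A minimal sketch follows; the filegroup and table names are invented for illustration, and the filegroup must already exist in the database:

        -- Assumes a filegroup named LOB_FG was already added with
        -- ALTER DATABASE ... ADD FILEGROUP LOB_FG and ADD FILE ... TO FILEGROUP LOB_FG.
        CREATE TABLE dbo.Documents
        (
            DocumentId INT IDENTITY(1,1) PRIMARY KEY,
            Title      NVARCHAR(200)  NOT NULL,
            Body       NVARCHAR(MAX)  NULL,    -- LOB data that is pushed off-row
            Scan       VARBINARY(MAX) NULL
        )
        ON [PRIMARY]            -- regular row data stays on the default filegroup
        TEXTIMAGE_ON LOB_FG;    -- off-row LOB data lands in the separate filegroup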

    Read the article

  • #DAX Query Plan in SQL Server 2012 #Tabular

    - by Marco Russo (SQLBI)
    The SQL Server Profiler provides a lot of information regarding the internal behavior of DAX queries sent to a BISM Tabular model. As in MDX, in DAX there is a Formula Engine (FE) and a Storage Engine (SE). The SE is usually handled by Vertipaq (unless you are using DirectQuery mode), and the Vertipaq SE Query class of events gives you a SQL-like syntax that represents the query sent to the storage engine. Another interesting class of events is the DAX Query Plan, which contains a couple...(read more)

    Read the article

  • Introducing Microsoft SQL Server 2008 R2

    - by CatherineRussell
    I was thrilled to find a free ebook: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The purpose of Introducing Microsoft SQL Server 2008 R2 is to point out both the new and the improved in the latest version of SQL Server. Great info, and they are sharing it for free. Feel free to follow the authors and ask questions on Twitter: @RossMistry @StaciaMisner. Here is the link to the book: http://blogs.msdn.com/b/microsoft_press/archive/2010/04/14/free-ebook-introducing-microsoft-sql-server-2008-r2.aspx

    Read the article

  • APress Deal of the Day 2/June/2014 - Pro SQL Server 2012 Integration Services

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/02/apress-deal-of-the-day-2june2014---pro-sql-server.aspx Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430236924 is Pro SQL Server 2012 Integration Services. “Pro SQL Server 2012 Integration Services is your key to building powerful extract, transform, load (ETL) solutions using SQL Server 2012 Integration Services (SSIS).”

    Read the article

  • Read SQL Server Reporting Services Overview

    - by Editor
    Read an excellent, 14-page, general overview of Microsoft SQL Server 2008 Reporting Services entitled White Paper: Reporting Services in SQL Server 2008. Download the White Paper (360 KB Microsoft Word file). Microsoft SQL Server 2008 Reporting Services provides a complete server-based platform that is designed to support a wide variety [...]

    Read the article

  • APress Deal of the day 29/Jun/2013 - Pro SQL Database for Windows Azure

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/29/apress-deal-of-the-day-29jun2013---pro-sql-database.aspx Today's $10 Deal of the day from APress at http://www.apress.com/9781430243953 is Pro SQL Database for Windows Azure. "Pro SQL Database for Windows Azure, 2nd Edition introduces you to Microsoft's cloud-based delivery of its enterprise-caliber, SQL Server database management system—showing you how to program and administer it in a variety of cloud computing scenarios."

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices?

    The classic answer: write a web service layer. Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different?

    The better answer: the iSql SDK. Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API.

    OSQL in 25 lines of code. As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video.

    Beta out now - your chance to give us your suggestions! We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • "In-Memory" arrives in SQL Server, Microsoft beefs up Excel for BI, and announces SQL Server 2012 SP1 at Pass Summit 2012

    Pass Summit 2012: Microsoft gives SQL Server an In-Memory database and announces the availability of SQL Server 2012 SP1 and SQL Server 2012 PDW. Microsoft used its Pass Summit 2012 event, dedicated to SQL Server, to unveil its vision of a modern data management platform. Through several announcements, the company renewed its commitment to helping its customers make better use of their data, whether structured or unstructured, wherever it resides and whatever its size. The headline announcement that marked the opening of Pass Summit 2012 was the presentation of a new in-memory transactional database technology

    Read the article

  • Practical PowerShell for SQL Server Developers and DBAs – Part 1

    There is a lot of confusion amongst DBAs about using PowerShell due to the existence of the deprecated SQLPS mini-shell in SSMS and the newer SQLPS module. In a two-part article and wallchart, Michael Sorens explains how to install it, what it is, and some of the excellent things it has to offer.

    Read the article

  • Cleaning Up SQL Server Deployment Scripts

    Although, generally speaking, source control is the truth, a database doesn't quite conform to the ideal because the target schema can, for valid reasons, contain other conflicting truths that can't easily be captured in source control. Dave Ballantyne explains the problems and suggests a solution.

    Read the article

  • How To Support Multiple SQL Server Package Configurations in SSIS

    I have several applications that use SSIS packages and I want to be able to store all the configurations together in a single table when I deploy. When a package executes I need a way of specifying the "application" and having SSIS automatically handle the package configuration based on the application.
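
    One common shape for this is to keep the standard SQL Server configuration table that SSIS generates and use its ConfigurationFilter column as the application key. The sketch below uses invented application names, server names and connection strings:

        -- The table SSIS creates when you choose a SQL Server package configuration.
        CREATE TABLE dbo.[SSIS Configurations]
        (
            ConfigurationFilter NVARCHAR(255) NOT NULL,  -- used here as the "application" name
            ConfiguredValue     NVARCHAR(255) NULL,
            PackagePath         NVARCHAR(255) NOT NULL,
            ConfiguredValueType NVARCHAR(20)  NOT NULL
        );

        -- Hypothetical rows: one filter value per application, so every package reads
        -- only the rows whose filter matches the application it belongs to.
        INSERT INTO dbo.[SSIS Configurations]
            (ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType)
        VALUES
            (N'SalesApp',   N'Data Source=PRODSQL;Initial Catalog=Sales;Integrated Security=SSPI;',
             N'\Package.Connections[SourceDB].Properties[ConnectionString]', N'String'),
            (N'FinanceApp', N'Data Source=PRODSQL;Initial Catalog=Finance;Integrated Security=SSPI;',
             N'\Package.Connections[SourceDB].Properties[ConnectionString]', N'String');

    Which filter a package reads is part of its configuration definition; it is typically made indirect (for example through an environment variable) so the same table can serve every application and deployment.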

    Read the article

  • Handling Constraint Violations and Errors in SQL Server

    The database developer can, of course, throw all errors back to the application developer to deal with, but this is neither kind nor necessary. How errors are dealt with is dependent on the application, but the process itself isn't entirely obvious.
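
    For instance, a constraint violation can be absorbed in T-SQL with TRY/CATCH rather than bounced straight back to the application. This is only a minimal sketch with an invented table (the parameterless THROW needs SQL Server 2012 or later):

        CREATE TABLE dbo.Customer
        (
            CustomerId INT PRIMARY KEY,
            Email      NVARCHAR(256) NOT NULL UNIQUE
        );
        GO
        BEGIN TRY
            INSERT INTO dbo.Customer (CustomerId, Email) VALUES (1, N'a@example.com');
            INSERT INTO dbo.Customer (CustomerId, Email) VALUES (2, N'a@example.com'); -- duplicate email
        END TRY
        BEGIN CATCH
            -- 2627 = PK/UNIQUE constraint, 2601 = unique index, 547 = FK/CHECK violation
            IF ERROR_NUMBER() IN (2627, 2601, 547)
                PRINT 'Constraint violation handled here: ' + ERROR_MESSAGE();
            ELSE
                THROW;  -- anything unexpected still goes back to the caller
        END CATCH;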

    Read the article

  • APress Deal of the Day 20/Dec/2010 - Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008

    - by TATWORTH
    Today's $10 bargain PDF from Apress is: Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008 Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. $49.99 | Published Jul 2010 |

    Read the article

  • Configure SQL Express 2005 for remote access

    Please follow the steps below, as shown in the pictures, to configure SQL Server Express 2005 for remote access.
    Fig 1: Open SQL Server Configuration Manager
    Fig 2: Navigate to SQL Server 2005 Network Configuration and click on the Protocols node
    Fig 3: Enable the TCP/IP protocol
    Fig 4: Enable the Named Pipes protocol
    Fig 5: After enabling the TCP/IP and Named Pipes protocols
    Fig 6: Finally, click on TCP/IP to configure the port number on which SQL Express 2005 listens for network requests
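
    Once the protocols are enabled and the SQL Server (SQLEXPRESS) service has been restarted, a quick sanity check from a TCP connection shows which address and port the instance is actually listening on; this sketch assumes you have permission to read the connection DMV:

        -- local_tcp_port is NULL if the current connection came in over Shared Memory,
        -- so run this from a client that connects via TCP/IP.
        SELECT local_net_address, local_tcp_port
        FROM   sys.dm_exec_connections
        WHERE  session_id = @@SPID;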

    Read the article

  • CBO????????

    - by Liu Maclean(???)
    ???Itpub????????CBO??????????, ????????: SQL> create table maclean1 as select * from dba_objects; Table created. SQL> update maclean1 set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean1 on maclean1(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true); PL/SQL procedure successfully completed. SQL> explain plan for select * from maclean1 where status='INVALID'; Explained. SQL> set linesize 140 pagesize 1400 SQL> select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT --------------------------------------------------------------------------- Plan hash value: 987568083 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 11320 | 1028K| 85 (0)| 00:00:02 | |* 1 | TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K| 85 (0)| 00:00:02 | ------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("STATUS"='INVALID') 13 rows selected. 10053 trace Access path analysis for MACLEAN1 *************************************** SINGLE TABLE ACCESS PATH   Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]   Column (#10): STATUS(     AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000   Table: MACLEAN1  Alias: MACLEAN1     Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50   Access Path: TableScan     Cost:  85.33  Resp: 85.33  Degree: 0       Cost_io: 85.00  Cost_cpu: 11935345       Resp_io: 85.00  Resp_cpu: 11935345   Access Path: index (AllEqRange)     Index: IND_MACLEAN1     resc_io: 185.00  resc_cpu: 8449916     ix_sel: 0.500000  ix_sel_with_filters: 0.500000     Cost: 185.24  Resp: 185.24  Degree: 1   Best:: AccessPath: TableScan          Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0 ?????10053????????????,?????Density = 0.5 ?? 1/ NDV ??? ??????????????STATUS='INVALID"???????????, ????????????????? ????”STATUS”=’INVALID’ condition???2?,?status??????,??????dbms_stats?????????????,???CBO????INDEX Range ind_maclean1,???????,??????opitimizer?????? ?????????????????????????,????????,??????????status=’INVALID’???????card??,????????: [oracle@vrh4 ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production PL/SQL Release 11.2.0.2.0 - Production CORE 11.2.0.2.0 Production TNS for Linux: Version 11.2.0.2.0 - Production NLSRTL Version 11.2.0.2.0 - Production SQL> show parameter optimizer_fea NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ optimizer_features_enable string 11.2.0.2 SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com & www.askmaclean.com SQL> drop table maclean; Table dropped. 
SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2'); PL/SQL procedure successfully completed. ???????2?bucket????, ??????????????? ???Quest???Guy Harrison???????FREQUENCY????????,??????: rem rem Generate a histogram of data distribution in a column as recorded rem in dba_tab_histograms rem rem Guy Harrison Jan 2010 : www.guyharrison.net rem rem hexstr function is from From http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563 set pagesize 10000 set lines 120 set verify off col char_value format a10 heading "Endpoint|value" col bucket_count format 99,999,999 heading "bucket|count" col pct format 999.99 heading "Pct" col pct_of_max format a62 heading "Pct of|Max value" rem col endpoint_value format 9999999999999 heading "endpoint|value" CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER) RETURN VARCHAR2 AS l_str LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x')); l_return VARCHAR2 (4000); BEGIN WHILE (l_str IS NOT NULL) LOOP l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx')); l_str := SUBSTR (l_str, 3); END LOOP; RETURN (SUBSTR (l_return, 1, 6)); END; / WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT nvl(endpoint_actual_value,endpoint_value) endpoint_value , bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data; WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT hexstr(endpoint_value) char_value, bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data ORDER BY endpoint_value; ?????,??????????FREQUENCY?????: ??dbms_stats ?????STATUS=’INVALID’ bucket count=9 percent = 0.04 ,??????10053 trace????????: SQL> explain plan for select * from maclean where status='INVALID'; Explained. 
SQL>  select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT ------------------------------------- Plan hash value: 3087014066 ------------------------------------------------------------------------------------------- | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     | ------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 | |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 | |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 | ------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("STATUS"='INVALID') ??????????????CBO???????STATUS=’INVALID’?cardnality?? , ??????????? ,??index range scan??Full table scan? ????????????????10053 trace: SQL> alter system flush shared_pool; System altered. SQL> oradebug setmypid; Statement processed. SQL> oradebug event 10053 trace name context forever ,level 1; Statement processed. SQL> explain plan for select * from maclean where status='INVALID'; Explained. SINGLE TABLE ACCESS PATH Single Table Cardinality Estimation for MACLEAN[MACLEAN] Column (#10): NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2 ???NewDensity= bucket_count / SUM(bucket_count) /2 Column (#10): STATUS( AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199 Histogram: Freq #Bkts: 2 UncompBkts: 22640 EndPtVals: 2 Table: MACLEAN Alias: MACLEAN Card: Original: 22640.000000 Rounded: 9 Computed: 9.00 Non Adjusted: 9.00 Access Path: TableScan Cost: 85.30 Resp: 85.30 Degree: 0 Cost_io: 85.00 Cost_cpu: 10804625 Resp_io: 85.00 Resp_cpu: 10804625 Access Path: index (AllEqRange) Index: IND_MACLEAN resc_io: 2.00 resc_cpu: 20763 ix_sel: 0.000398 ix_sel_with_filters: 0.000398 Cost: 2.00 Resp: 2.00 Degree: 1 Best:: AccessPath: IndexRange Index: IND_MACLEAN Cost: 2.00 Degree: 1 Resp: 2.00 Card: 9.00 Bytes: 0 ???????????2 bucket?????CBO????????????,???????????????????,???dbms_stats.DEFAULT_METHOD_OPT????????????????????? ???dbms_stats?????????????????????col_usage$??????predicate???????,??col_usage$??<????????SMON??(?):??col_usage$????>? ??????????dbms_stats????????,col_usage$????????????predicate???,??dbms_stats??????????????????, ?: SQL> drop table maclean; Table dropped. SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. ??dbms_stats??method_opt??maclean? SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS old  12:    WHERE owner = '&owner' new  12:    WHERE owner = 'SYS' Enter value for table: MACLEAN old  13:      AND table_name = '&table' new  13:      AND table_name = 'MACLEAN' Enter value for column: STATUS old  14:      AND column_name = '&column' new  14:      AND column_name = 'STATUS' no rows selected ????col_usage$?????,????????status????? 
declare begin for i in 1..500 loop execute immediate ' alter system flush shared_pool'; DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; execute immediate 'select count(*) from maclean where status=''INVALID'' ' ; end loop; end; / PL/SQL procedure successfully completed. SQL> select obj# from obj$ where name='MACLEAN';       OBJ# ----------      97215 SQL> select * from  col_usage$ where  OBJ#=97215;       OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------      97215          1              1              0                 0           0          0          0 17-OCT-11      97215         10            499              0                 0           0          0          0 17-OCT-11 SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS Enter value for table: MACLEAN Enter value for column: STATUS Endpoint        bucket         Pct of value            count     Pct Max value ---------- ----------- ------- -------------------------------------------------------------- INVALI               2     .04 VALIC3           5,453   99.96  *************************************************

    Read the article

  • Serve up PC hard drive as USB mass storage

    - by sheepsimulator
    Is there a software package available that can serve up a hard-drive internal to a PC and make it available over USB to other USB Master nodes as mass storage? Ex: take your C: or /dev/hda drive on a PC (let's call the computer PC-A), and run a driver program which makes your C: or /dev/hda drive available to external devices as USB mass storage. When you'd hook up another PC (PC-B) to PC-A via USB, it would detect a USB mass storage device, which is C: or /dev/hda on PC-A. Is this even possible? EDIT: I know that there are other ways of making data on a drive available between two different computers (eg. putting PC-A's hdd in a USB-drive-enclosure, or having PC-A make the hdd available via a network share). But I'd like to know if the method that I describe above is even technically possible.

    Read the article

  • Is there any chance that my data will get silently corrupted with a robocopy SMB network transfer?

    - by Archagon
    I'm setting up a NAS box for the first time. At the moment, I have most of my data backed up to a few local hard drives, and I intend to transfer all the data to my NAS over ethernet once the RAID array is setup. Since this is all happening over the network, I'm a bit worried about my data getting corrupted silently during transfer. From what I understand, data generally doesn't get corrupted without notice on local transfers because a checksum is performed at some point by the drive or the OS. (This could be totally wrong.) Does the same thing happen with SMB, or is it up to the transferrer to check the integrity of their data? And if it doesn't happen with SMB, is there a protocol that does ensure data integrity? I know that rsync can checksum a transfer, but I'm on Windows and I already have a robocopy configuration that I like. Will my data be safe or do I have to use an external checksum tool to make sure?

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over and I checked them: storage class Reduced Redundancy, encryption set to none. WTF? So I deleted all files and folders, manually created the folder structure, and again set the storage class and encryption level. I copied some files and bam, still showing (at the file level) as Reduced Redundancy with no encryption. So my question is this: (a) is it really raid'd and encrypted, just not showing it properly (as the root folder is, how can the file not be?), or (b) am I being a huge tool and missing something?

    Read the article

  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. Controller can operate on 2 NSManagedContext's: NSManagedObjectContext *mainMoc instance variable for main thread. this contexts is used only by quick access for UI by main thread or by dipatch_get_main_queue() global queue. NSManagedObjectContext *bgMoc for background tasks (importing and fetching data for NSFetchedresultsController for tables). This background tasks are fired ONLY by user defined queue: dispatch_queue_t bgQueue (instance variable in database controller object). Fetching data for tables is done in background to not block user UI when bigger or more complicated predicates are performed. Example fetching code for NSFetchedResultsController in my table view controllers: -(void)fetchData{ dispatch_async([CDdb db].bgQueue, ^{ NSError *error = nil; [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate]; if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) { NSSLog(@"Unresolved error in fetchData %@", error); } if (!initial_fetch_attampted)initial_fetch_attampted = YES; fetching = NO; dispatch_async(dispatch_get_main_queue(), ^{ [self.table reloadData]; [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES]; }); }); } // end of fetchData function bgMoc merges with mainMoc on save using NSManagedObjectContextDidSaveNotification: - (void)bgMocDidSave:(NSNotification *)saveNotification { // CDdb - bgMoc didsave - merging changes with main mainMoc dispatch_async(dispatch_get_main_queue(), ^{ [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification]; // Extra notification for some other, potentially interested clients [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification]; }); } - (void)mainMocDidSave:(NSNotification *)saveNotification { // CDdb - main mainMoc didSave - merging changes with bgMoc dispatch_async(self.bgQueue, ^{ [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification]; }); } NSfetchedResultsController delegate has only one method implemented (for simplicity): - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { dispatch_async(dispatch_get_main_queue(), ^{ [self fetchData]; }); } This way I am trying to follow Apple recommendation for Core Data: 1 NSManagedObjectContext per thread. I know this pattern is not completely clean for at last 2 reasons: bgQueue not necessarily fires the same thread after suspension but since it is serial, it should not matter much (there is never 2 threads trying access bgMoc NSManagedObjectContext dedicated to it). Sometimes table view data source methods will ask NSFetchedResultsController for info from bgMoc (since fetch is done on bgQueue) like sections count, fetched objects in section count, etc.... Event with this flaws this approach works pretty well of the 95% of application running time until ... AND HERE GOES MY QUESTION: Sometimes, very randomly application freezes but not crashes. It does not response on any touch and the only way to get it back to live is to restart it completely (switching back to and from background does not help). No exception is thrown and nothing is printed to the console (I have Breakpoints set for all exception in Xcode). 
I have tried to debug it using Instruments (the Time Profiler especially) to see if there is something heavy going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track / debug this. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously only (using dispatch_async everywhere), which makes me think it is not just a standard deadlock. Are there any hints on how I could get more info about what is going on? Some extra debug flags, Instruments tricks, build settings, etc.? Any suggestions on what could be the cause are very much appreciated, as well as (or) pointers on how to implement background fetching for NSFetchedResultsController and background importing in a better way.

    Read the article

  • (SQL) Selecting from a database based on multiple pairs of pairs

    - by Owen Allen
    The problem I've encountered is attempting to select rows from a database where 2 columns in each row match specific pairs of data, i.e., selecting rows from data where id = 1 AND type = 'news'. Obviously, if it were 1 simple pair it would be easy, but the issue is that we are selecting rows based on hundreds of pairs of data. I feel as if there must be some way to do this query without looping through the pairs and querying each individually. I'm hoping some SQL stackers can provide guidance. Here's a full code breakdown.

    Let's imagine that I have the following dataset, where history_id is the primary key. I simplified the structure a bit regarding the dates for ease of reading.

    table: history

    history_id  id  type   user_id  date
    1           1   news   1        5/1
    2           1   news   1        5/1
    3           1   photo  1        5/2
    4           3   news   1        5/3
    5           4   news   1        5/3
    6           1   news   1        5/4
    7           2   photo  1        5/4
    8           2   photo  1        5/5

    If the user wants to select rows from the database based on a date range, we would take a subset of that data:

    SELECT history_id, id, type, user_id, date FROM history WHERE date BETWEEN '5/3' AND '5/5'

    which returns the following dataset:

    history_id  id  type   user_id  date
    4           3   news   1        5/3
    5           4   news   1        5/3
    6           1   news   1        5/4
    7           2   photo  1        5/4
    8           2   photo  1        5/5

    Now, using that subset of data, I need to determine how many of those entries represent the first entry in the database for each (type, id) pairing, i.e., is row 4 the first time in the database that id: 3, type: news appears? So I use a WITH ... MIN() query. In real code the two lists are programmatically generated from the result sets of our previous query; here I spelled them out for ease of reading.

    WITH previous AS ( SELECT history_id, id, type FROM history WHERE id IN (1,2,3,4) AND type IN ('news','photo') ) SELECT min(history_id) as history_id, id, type FROM previous GROUP BY id, type

    This returns the following dataset:

    history_id  id  type   user_id  date
    1           1   news   1        5/1
    2           1   news   1        5/1
    3           1   photo  1        5/2
    4           3   news   1        5/3
    5           4   news   1        5/3
    6           1   news   1        5/4
    7           2   photo  1        5/4
    8           2   photo  1        5/5

    You'll notice it's the entire original dataset, because we are matching id and type individually against lists, rather than as collective pairs. The result I desire is the following, but I can't figure out the SQL to get it:

    history_id  id  type   user_id  date
    1           1   news   1        5/1
    4           3   news   1        5/3
    5           4   news   1        5/3
    7           2   photo  1        5/4

    Obviously, I could go the route of looping through each pair and querying the database to determine its first result, but that seems an inefficient solution. I figured one of the SQL gurus on this site might be able to spread some wisdom. In case I'm approaching this situation incorrectly, the gist of the whole routine is that the database stores all creations and edits in the same table. I need to track each user's behavior and determine how many entries in the history table are edits or creations over a specific date range. Therefore I select all (type, id) pairs from the date range based on a user_id, and then for each pairing I determine whether the user is responsible for the first one that occurs in the database. If first, then creation; else edit. Any assistance would be awesome.
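
    One way to avoid the per-pair loop is to build the set of (id, type) pairs from the date range and join it back to the history table before grouping. This is only a sketch against the sample data above, not an answer from the original thread:

        WITH pairs_in_range AS
        (
            SELECT DISTINCT id, type
            FROM   history
            WHERE  date BETWEEN '5/3' AND '5/5'
        )
        SELECT MIN(h.history_id) AS history_id, h.id, h.type
        FROM   history h
        JOIN   pairs_in_range p
          ON   p.id = h.id AND p.type = h.type
        GROUP  BY h.id, h.type;

    Against the sample data this returns history_ids 1, 4, 5 and 7; comparing each minimum with the date range then classifies the in-range rows as creations or edits.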

    Read the article

  • Tile Engine - Procedural generation, Data structures, Rendering methods - A lot of effort question!

    - by Trixmix
    Isometric tile and GameObject rendering. To achieve the desired look I need to take into consideration which tiles need to be drawn first and which last. What I used is an object, TileRenderQueue, that you give a tile list and it gives you back a queue of which ones to draw, based on their Z coordinate, so that if the Z is higher it needs to be drawn last. Now, if you read above, you would know that instead of the location data being stored in the tile instance, I want the index in the array to be the location, and then maybe, based on the array, I could draw the tiles instead of taking a long time for-looping and ordering them by Z. This is the hardest part for me. It's hard for me to find a simple solution to the which-one-to-draw-when problem. Also, there is the fact that the GameObject with the larger X needs to be drawn over the rest of the tiles, and so on. Here is an example: All the parts work together to create an efficient engine, so it's important to me that you answer all of the parts. I hope you will work on the answers just as hard as I worked on this question! If there is any unclear part, tell me so in the comments! Thanks for the help!

    Read the article

  • NEW 2-Day Instructor Led Course on Oracle Data Mining Now Available!

    - by chberger
    A NEW 2-Day Instructor Led Course on Oracle Data Mining has been developed for customers and anyone wanting to learn more about data mining, predictive analytics and knowledge discovery inside the Oracle Database.  Course Objectives: Explain basic data mining concepts and describe the benefits of predictive analysis Understand primary data mining tasks, and describe the key steps of a data mining process Use the Oracle Data Miner to build,evaluate, and apply multiple data mining models Use Oracle Data Mining's predictions and insights to address many kinds of business problems, including: Predict individual behavior, Predict values, Find co-occurring events Learn how to deploy data mining results for real-time access by end-users Five reasons why you should attend this 2 day Oracle Data Mining Oracle University course. With Oracle Data Mining, a component of the Oracle Advanced Analytics Option, you will learn to gain insight and foresight to: Go beyond simple BI and dashboards about the past. This course will teach you about "data mining" and "predictive analytics", analytical techniques that can provide huge competitive advantage Take advantage of your data and investment in Oracle technology Leverage all the data in your data warehouse, customer data, service data, sales data, customer comments and other unstructured data, point of sale (POS) data, to build and deploy predictive models throughout the enterprise. Learn how to explore and understand your data and find patterns and relationships that were previously hidden Focus on solving strategic challenges to the business, for example, targeting "best customers" with the right offer, identifying product bundles, detecting anomalies and potential fraud, finding natural customer segments and gaining customer insight.

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - thats probably because in writing it I bacame very time-concious after the ASM/ACFS on Solaris extravaganza I ran last year which was almost too long for mortal man to finish in the 1 hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology, and also Compression which is the compliment of Deduplication. The exercises are just introductions - you are referred to the ZFS Adminstration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 Gb disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0 Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use. 
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to topExercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task:Try out different forms of compression available in ZFS Lab:Create 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system. 
With the property enabled duplicate data blocks are removed synchronously. The result is that only unique data is stored and common componenents are shared. Task:See how to implement deduplication and its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib#zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Adminastrion Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. Back to top root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system. 
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while mainaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.

    Read the article

  • How do I express subtle relationships in my data?

    - by Chuck H
    "A" is related to "B" and "C". How do I show that "B" and "C" might, by this context, be related as well? Example: Here are a few headlines about a recent Broadway play: 1 - David Mamet's Glengarry Glen Ross, Starring Al Pacino, Opens on Broadway 2 - Al Pacino in 'Glengarry Glen Ross': What did the critics think? 3 - Al Pacino earns lackluster reviews for Broadway turn 4 - Theater Review: Glengarry Glen Ross Is Selling Its Stars Hard 5 - Glengarry Glen Ross; Hey, Who Killed the Klieg Lights? Problem: Running a fuzzy-string match over these records will establish some relationships, but not others, even though a human reader could pick them out from context in much larger datasets. How do I find the relationship that suggests #3 is related to #4? Both of them can be easily connected to #1, but not to each other. Is there a (Googlable) name for this kind of data or structure? What kind of algorithm am I looking for? Goal: Given 1,000 headlines, a system that automatically suggests that these 5 items are all probably about the same thing. To be honest, it's been so long since I've programmed I'm at a loss how to properly articulate this problem. (I don't know what I don't know, if that makes sense). This is a personal project and I'm writing it in Python. Thanks in advance for any help, advice, and pointers!

    Read the article
