Search Results

Search found 22721 results on 909 pages for 'block level storage'.


  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes: time_point - the time of the sample cluster - the name of the cluster of nodes from which the sample was taken code - the name of the node from which the sample was taken qty1 = the value of the sample for the first quantity qty2 = the value of the sample for the second quantity I need to derive some values from the data set, grouped in three ways - once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point. I came up with the following solution: dataset.sort(key=operator.attrgetter('time_point')) # For the whole set sys_qty1 = 0 sys_qty2 = 0 sys_combo = 0 sys_max = 0 # For the cluster grouping cluster_qty1 = defaultdict(int) cluster_qty2 = defaultdict(int) cluster_combo = defaultdict(int) cluster_max = defaultdict(int) cluster_peak = defaultdict(int) # For the node grouping node_qty1 = defaultdict(int) node_qty2 = defaultdict(int) node_combo = defaultdict(int) node_max = defaultdict(int) node_peak = defaultdict(int) for t in dataset: # For the whole system ###################################################### sys_qty1 += t.qty1 sys_qty2 += t.qty2 sys_combo = sys_qty1 + sys_qty2 if sys_combo > sys_max: sys_max = sys_combo # The Peak class is to record the time point and the cumulative quantities system_peak = Peak(time_point=t.time_point, qty1=sys_qty1, qty2=sys_qty2) # For the cluster grouping ################################################## cluster_qty1[t.cluster] += t.qty1 cluster_qty2[t.cluster] += t.qty2 cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster] if cluster_combo[t.cluster] > cluster_max[t.cluster]: cluster_max[t.cluster] = cluster_combo[t.cluster] cluster_peak[t.cluster] = Peak(time_point=t.time_point, qty1=cluster_qty1[t.cluster], qty2=cluster_qty2[t.cluster]) # For the node grouping ##################################################### node_qty1[t.node] += t.qty1 node_qty2[t.node] += t.qty2 node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node] if node_combo[t.node] > node_max[t.node]: node_max[t.node] = node_combo[t.node] node_peak[t.node] = Peak(time_point=t.time_point, qty1=node_qty1[t.node], qty2=node_qty2[t.node]) This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied/pasted three copies of the same algorithm. 
To avoid the copy/paste issues of the above, I tried this also: def find_peaks(level, dataset): def grouping(object, attr_name): if attr_name == 'system': return attr_name else: return object.__dict__[attrname] cuml_qty1 = defaultdict(int) cuml_qty2 = defaultdict(int) cuml_combo = defaultdict(int) level_max = defaultdict(int) level_peak = defaultdict(int) for t in dataset: cuml_qty1[grouping(t, level)] += t.qty1 cuml_qty2[grouping(t, level)] += t.qty2 cuml_combo[grouping(t, level)] = (cuml_qty1[grouping(t, level)] + cuml_qty2[grouping(t, level)]) if cuml_combo[grouping(t, level)] > level_max[grouping(t, level)]: level_max[grouping(t, level)] = cuml_combo[grouping(t, level)] level_peak[grouping(t, level)] = Peak(time_point=t.time_point, qty1=node_qty1[grouping(t, level)], qty2=node_qty2[grouping(t, level)]) return level_peak system_peak = find_peaks('system', dataset) cluster_peak = find_peaks('cluster', dataset) node_peak = find_peaks('node', dataset) For the (non-grouped) system-level calculations, I also came up with this, which is pretty: dataset.sort(key=operator.attrgetter('time_point')) def cuml_sum(seq): rseq = [] t = 0 for i in seq: t += i rseq.append(t) return rseq time_get = operator.attrgetter('time_point') q1_get = operator.attrgetter('qty1') q2_get = operator.attrgetter('qty2') timeline = [time_get(t) for t in dataset] cuml_qty1 = cuml_sum([q1_get(t) for t in dataset]) cuml_qty2 = cuml_sum([q2_get(t) for t in dataset]) cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)] combo_max = max(cuml_combo) time_max = timeline.index(combo_max) q1_at_max = cuml_qty1.index(time_max) q2_at_max = cuml_qty2.index(time_max) However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calaculations without doing something slow like: timeline = defaultdict(int) cuml_qty1 = defaultdict(int) #...etc. for c in cluster_list: timeline[c] = [time_get(t) for t in dataset if t.cluster == c] cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c] #...etc. Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern. This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python version were essentially very ineffecient straight transalations of what that did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
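    One way to keep the single pass but drop the copy/paste is sketched below. It assumes the records expose the same time_point/cluster/node/qty1/qty2 attributes as the first snippet and that Peak behaves like a small value object (the namedtuple here is a stand-in): pass one key function per grouping and update all three sets of running totals in the same loop.

      from collections import defaultdict, namedtuple
      from operator import attrgetter

      # Stand-in for the asker's Peak class (hypothetical field names).
      Peak = namedtuple('Peak', 'time_point qty1 qty2')

      def find_peaks(dataset, key_funcs):
          """One pass over time-sorted data; one peak table per grouping key."""
          qty1 = [defaultdict(int) for _ in key_funcs]
          qty2 = [defaultdict(int) for _ in key_funcs]
          best = [defaultdict(int) for _ in key_funcs]
          peaks = [{} for _ in key_funcs]
          for t in sorted(dataset, key=attrgetter('time_point')):
              for i, key_of in enumerate(key_funcs):
                  k = key_of(t)
                  qty1[i][k] += t.qty1
                  qty2[i][k] += t.qty2
                  combo = qty1[i][k] + qty2[i][k]
                  if combo > best[i][k]:
                      best[i][k] = combo
                      peaks[i][k] = Peak(t.time_point, qty1[i][k], qty2[i][k])
          return peaks

      # System, cluster and node peaks from a single iteration of the dataset.
      system_peaks, cluster_peaks, node_peaks = find_peaks(
          dataset,
          [lambda t: 'system', attrgetter('cluster'), attrgetter('node')])

    With this shape, system_peaks['system'] holds the overall peak while cluster_peaks and node_peaks are keyed by cluster and node name; the loop body is written once, so adding another grouping level means adding one key function rather than another copy of the algorithm.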

    Read the article

  • How do I maximize code coverage?

    - by naivedeveloper
    Hey all, the following is a snippet of code taken from the unix ptx utility. I'm attempting to maximize code coverage on this utility, but I am unable to reach the indicated portion of code. Admittedly, I'm not as strong in my C skills as I used to be. The portion of code is indicated with comments, but it is towards the bottom of the block. if (used_length == allocated_length) { allocated_length += (1 << SWALLOW_REALLOC_LOG); block->start = (char *) xrealloc (block->start, allocated_length); } Any help interpreting the indicated portion in order to cover that block would be greatly appreciated. /* Reallocation step when swallowing non regular files. The value is not the actual reallocation step, but its base two logarithm. */ #define SWALLOW_REALLOC_LOG 12 static void swallow_file_in_memory (const char *file_name, BLOCK *block) { int file_handle; /* file descriptor number */ struct stat stat_block; /* stat block for file */ size_t allocated_length; /* allocated length of memory buffer */ size_t used_length; /* used length in memory buffer */ int read_length; /* number of character gotten on last read */ /* As special cases, a file name which is NULL or "-" indicates standard input, which is already opened. In all other cases, open the file from its name. */ bool using_stdin = !file_name || !*file_name || strcmp (file_name, "-") == 0; if (using_stdin) file_handle = STDIN_FILENO; else if ((file_handle = open (file_name, O_RDONLY)) < 0) error (EXIT_FAILURE, errno, "%s", file_name); /* If the file is a plain, regular file, allocate the memory buffer all at once and swallow the file in one blow. In other cases, read the file repeatedly in smaller chunks until we have it all, reallocating memory once in a while, as we go. */ if (fstat (file_handle, &stat_block) < 0) error (EXIT_FAILURE, errno, "%s", file_name); if (S_ISREG (stat_block.st_mode)) { size_t in_memory_size; block->start = (char *) xmalloc ((size_t) stat_block.st_size); if ((in_memory_size = read (file_handle, block->start, (size_t) stat_block.st_size)) != stat_block.st_size) { error (EXIT_FAILURE, errno, "%s", file_name); } block->end = block->start + in_memory_size; } else { block->start = (char *) xmalloc ((size_t) 1 << SWALLOW_REALLOC_LOG); used_length = 0; allocated_length = (1 << SWALLOW_REALLOC_LOG); while (read_length = read (file_handle, block->start + used_length, allocated_length - used_length), read_length > 0) { used_length += read_length; /* Cannot cover from this point...*/ if (used_length == allocated_length) { allocated_length += (1 << SWALLOW_REALLOC_LOG); block->start = (char *) xrealloc (block->start, allocated_length); } /* ...to this point. */ } if (read_length < 0) error (EXIT_FAILURE, errno, "%s", file_name); block->end = block->start + used_length; } /* Close the file, but only if it was not the standard input. */ if (! using_stdin && close (file_handle) != 0) error (EXIT_FAILURE, errno, "%s", file_name); }
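    The guarded block only runs when the input is not a regular file (so the chunked-read else branch is taken) and enough bytes arrive to fill the 1 << SWALLOW_REALLOC_LOG buffer (4096 bytes) exactly, which triggers the xrealloc. Feeding the instrumented ptx more than 4 KiB on standard input should therefore cover it. A rough driver sketch, assuming a GNU ptx binary is on PATH:

      import subprocess

      # Pipe ~10 KB of text into ptx: stdin is not a regular file, so the
      # chunked-read path runs, used_length reaches allocated_length (4096)
      # and the reallocation branch executes.
      payload = b"coverage test words " * 512   # well past one 4 KiB chunk

      result = subprocess.run(["ptx"], input=payload,
                              stdout=subprocess.DEVNULL, check=True)
      print("ptx exit status:", result.returncode)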

    Read the article

  • SQL SERVER – ERROR – FIX – Msg 3702, Level 16, State 3, Line 1 Cannot drop database “MyDBName” because it is currently in use

    - by pinaldave
    I often give seminars and presentations at various organizations, and during presentations I frequently create and drop databases for demonstration purposes. Recently, in one of those presentations, I tried to remove a database I had just created and got the following error:

      Msg 3702, Level 16, State 3, Line 1
      Cannot drop database “MyDBName” because it is currently in use.

    The reason was simple: the database was still in use by another session or window. One option was to find the open session and close it, then drop the database. As I was in a rush, I quickly wrote the following code and was able to drop the database successfully.

      USE MASTER
      GO
      ALTER DATABASE MyDBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
      DROP DATABASE MyDBName
      GO

    Please note that I do all of this only in my demonstrations; do not run the above code on production without proper approval and supervision. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
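    If the same fix ever needs to be scripted rather than typed into Management Studio, a rough sketch with pyodbc follows; the connection string and database name are placeholders, and autocommit is required because ALTER DATABASE and DROP DATABASE cannot run inside a transaction.

      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
          "DATABASE=master;Trusted_Connection=yes;",   # placeholder connection
          autocommit=True,
      )
      cur = conn.cursor()
      cur.execute("ALTER DATABASE MyDBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;")
      cur.execute("DROP DATABASE MyDBName;")
      conn.close()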

    Read the article

  • Entity system in Lua, communication with C++ and level editor. Need advice.

    - by Notbad
    Hi! I know this is a really difficult subject. I have been reading a lot these days about entity systems, and now I'm ready to ask some questions (if you don't mind answering them) because I'm really confused. First of all, I have a basic 2D editor written in Qt, and I'm in the process of adding entity editing. I want the editor to be able to receive RTTI information from entities to change properties, and to create logic by linking published events to published actions (e.g. a "level activated" event triggers a "door open" action). Because of all this, I assume my entity system should be written in a scripting language, in my case Lua. On the other hand, I want to use a component-based design for my entities, and here my questions start: 1) Should I define my components in C++? If I do that, won't I lose all the RTTI information I want for my editor? On the other hand, I use Box2D for physics; if I define all my components in script, won't it be a lot of work to expose third-party libraries to Lua? 2) Where should I place the message system for my game engine: Lua or C++? I'm tempted to have C++ objects behave as servers offering services to the Lua business logic (physics system, rendering system, input system, World class, etc.) and to do everything else in Lua: creation/composition of entities from components, game logic, and so on. Could anyone give any insight on how to accomplish this, and which approach is better? Thanks in advance, HexDump.
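    To make the split being asked about concrete, here is a deliberately tiny, language-agnostic sketch of the pattern (written in Python only for illustration): the engine-side systems stay native, while the scripted layer owns entity composition plus an event bus that links published events to actions, as in the "level activated opens the door" example.

      class EventBus:
          """Minimal publish/subscribe hub shared by the scripted game logic."""
          def __init__(self):
              self._handlers = {}

          def subscribe(self, event, handler):
              self._handlers.setdefault(event, []).append(handler)

          def publish(self, event, **payload):
              for handler in self._handlers.get(event, []):
                  handler(**payload)

      class Entity:
          """An entity is just a name plus a bag of components."""
          def __init__(self, name):
              self.name = name
              self.components = {}

          def add(self, key, component):
              self.components[key] = component
              return self

      bus = EventBus()
      door = Entity("door").add("position", {"x": 10, "y": 2})

      # Scripted logic: link a published event to a published action.
      bus.subscribe("level_activated", lambda **_: print(door.name, "opens"))
      bus.publish("level_activated")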

    Read the article

  • What are some good courses to take my programming to the next level?

    - by absentx
    I am in search of either in-person or online training that could take my coding to the next level. I am looking to attack two specific areas: JavaScript: while I have been getting by with JavaScript for three or four years, I still feel like it takes a back seat to my other programming. I use jQuery a lot but would prefer to be proficient in pure JS as well. PHP: I feel fairly proficient at PHP, but I know there is room to improve. Here I am interested in something that can teach me the more advanced aspects of the language, improve my code writing, and perhaps cover object-oriented PHP in depth as well. I have looked into Netcom's training courses before, but I can't tell whether their advanced webmaster professional course would be a good fit or not. It seems like a force-fed course, but I am interested in it because I am looking for something in the one-to-two-week range that is targeted at what I need. I have zero experience with online programming courses; it appears many are available, but I am not sure about the quality.

    Read the article

  • What level of detail to use in an interface members descriptions?

    - by famousgarkin
    I am extracting interfaces from some classes in .NET, and I am not completely sure about what level of detail of description to use for some of the interface members (properties, methods). An example: interface ISomeInterface { /// <summary> /// Checks if the object is checked out. /// </summary> /// <returns> /// Returns true if the object is checked out, or if the object locking is not enabled, /// otherwise returns false. /// </returns> bool IsObjectCheckedOut(); } class SomeImplementation : ISomeInterface { public bool IsObjectCheckedOut() { // An implementation of the method that returns true if the object is checked out, // or if the object locking is not enabled } } The part in question is the <returns>...</returns> section of the IsObjectCheckedOut description in the interface. Is it ok to include such a detail about return value in the interface itself, as the code that will work with the interface should know exactly what that method will do? All the current implementations of the method will do just that. But is it ok to limit the possible other/future implementations by description this way? Or should this not be included in the interface description, as there is no way to actually ensure that other/future implementations will do exactly this? Is it better to be as general as possible regarding the interface in such circumstances? I am currently inclined to the latter option.

    Read the article

  • Ubuntu 12.04 - syslog showing "SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled"

    - by Tom G
    I have been seeing these random logs in syslog on our production system. There is no XFS setup. Fstab only shows local partitions, only EXT3 . There is nothing in crontabs either. The only file system related package I have installed is 'nfs-kernel-server' Kernel version is 3.2.0-31-generic . kernel: [601730.795990] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled kernel: [601730.798710] SGI XFS Quota Management subsystem kernel: [601730.828493] JFS: nTxBlock = 8192, nTxLock = 65536 kernel: [601730.897024] NTFS driver 2.1.30 [Flags: R/O MODULE]. kernel: [601730.964412] QNX4 filesystem 0.2.3 registered. kernel: [601731.035679] Btrfs loaded os-prober: debug: running /usr/lib/os-probes/mounted/10freedos on mounted /dev/vda1 10freedos: debug: /dev/vda1 is not a FAT partition: exiting os-prober: debug: running /usr/lib/os-probes/mounted/10qnx on mounted /dev/vda1 10qnx: debug: /dev/vda1 is not a QNX4 partition: exiting os-prober: debug: running /usr/lib/os-probes/mounted/20macosx on mounted /dev/vda1 macosx-prober: debug: /dev/vda1 is not an HFS+ partition: exiting os-prober: debug: running /usr/lib/os-probes/mounted/20microsoft on mounted /dev/vda1 20microsoft: debug: /dev/vda1 is not a MS partition: exiting os-prober: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/vda1 30utility: debug: /dev/vda1 is not a FAT partition: exiting os-prober: debug: running /usr/lib/os-probes/mounted/40lsb on mounted /dev/vda1 debug: running /usr/lib/os-probes/mounted/70hurd on mounted /dev/vda1 debug: running /usr/lib/os-probes/mounted/80minix on mounted /dev/vda1 debug: running /usr/lib/os-probes/mounted/83haiku on mounted /dev/vda1 83haiku: debug: /dev/vda1 is not a BeFS partition: exiting os-prober: debug: running /usr/lib/os-probes/mounted/90bsd-distro on mounted /dev/vda1 83haikuos-prober: debug: running /usr/lib/os-probes/mounted/90linux-distro on mounted /dev/vda1 os-prober: debug: running /usr/lib/os-probes/mounted/90solaris on mounted /dev/vda1 os-prober: debug: /dev/vda2: is active swap Why would this randomly show up? This also spawns multiple "jfsCommit" processes.

    Read the article

  • High-level description of how experimental C++ features are developed?

    - by Praxeolitic
    Herb Sutter in a video answers a question about the concepts proposal considered for C++11 and from his remarks it sounds like multiple groups offered prototype implementations but all of them left concerns about slow compile times. The comment surprised me because it suggests that, at least in some cases, the prototypes being developed are not just proofs of concept -- they're even expected to perform. All the work that must take has me curious. For mature languages, especially C++, how are experimental language features developed? Is it much different from developing a compiler that implements a standard? Does a developer have a sense of if it will work and perform or even if it ever could? What are the most time consuming parts and are any parts surprisingly easier than one might expect? The question is not what does the C++ standards committee do, but rather the part that comes before. When an experimental implementation for a proposal is being put together and there aren't any completely solidified rules, how is the sausage made? I'm not a professional compiler developer nor do I expect answers with step by step accounts. I'd like a high-level idea of how this would be done or if there are any general patterns at all. I don't know what to expect from the answers but even if there are no rules to the process and the small number of people who do this just cowboy it and then, for stuff that worked out, write up the "official version" as a proposal, that answer would still be informative.

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is related to the storage management due the high volumes of the unstoppable growing of information. Even if you have storage appliances and a lot of terabytes, thinks like backup, compression, deduplication, storage relocation, encryption, availability could be a nightmare. One standard option that you have with the Oracle WebCenter Content is to store data to the database. And the Oracle Database allows you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content Database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It’s up to you how you will manage that, you just need a good plan. During you “storage brainstorm plan” take in your mind what you need, such as storage petabytes of documents? You need everything on-line? There’s a way to logically separate the “good content” from the “legacy content”? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that this document could receive a lot of revisions and maybe you can consider the revision creation date. Your plan can have also complex rules like per Document Type or per a custom metadata like department or an hybrid per date, per DocType and an specific virtual folder. Extrapolation the use, you can have your repository distributed in different servers, different disks, different disk types (Such as ssds, sas, sata, tape,…), separated accordingly your business requirements, separating the “hot” content from the legacy and easily matching your compliance requirements. If you think to use by revision, the simple way is to consider the dId, that is the sequential unique id for every content created using the WebCenter Content or the dLastModified that is the date field of the FileStorage table that contains the date of inclusion of the content to the DB Table using SecureFiles. Using the scenario of partitioned repository using an hierarchical separation by date, we will transform the FileStorage table in an partitioned table using  “Partition by Range” of the dLastModified column (You can use the dId or a join with other tables for other metadata such as dDocType, Security, etc…). The test scenario bellow covers: Previous existent data on the JDBC Storage to be migrated to the new partitioned JDBC Storage Partition by Date Automatically generation of new partitions based on a pre-defined interval (Available only with Oracle Database 11g+) Deduplication and Compression for legacy data Oracle WebCenter Content 11g PS5 (Could present some customizations that do not affect the test scenario) For the test case you need some data stored using JDBC Storage to be the “legacy” data. If you do not have done before, just create an Storage rule pointed to the JDBC Storage: Enable the metadata StorageRule in the UI and upload some documents using this rule. For this test case you can run using the schema owner or an dba user. We will use the schema owner TESTS_OCS. I can’t forgot to tell that this is just a test and you should do a proper backup of your environment. When you use the schema owner, you need some privileges, using the dba user grant the privileges needed: REM Grant privileges required for online redefinition. 
GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS; GRANT ALTER ANY TABLE TO TESTS_OCS; GRANT DROP ANY TABLE TO TESTS_OCS; GRANT LOCK ANY TABLE TO TESTS_OCS; GRANT CREATE ANY TABLE TO TESTS_OCS; GRANT SELECT ANY TABLE TO TESTS_OCS; REM Privileges required to perform cloning of dependent objects. GRANT CREATE ANY TRIGGER TO TESTS_OCS; GRANT CREATE ANY INDEX TO TESTS_OCS; In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will partitioned automatically using 3 tablespaces in a round robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Table spaces for the test scenario: CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A 'tests_ocs_part_round_robin_a.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B 'tests_ocs_part_round_robin_b.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C 'tests_ocs_part_round_robin_c.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; Before start, gather optimizer statistics on the actual FileStorage table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE); Now check if is possible execute the redefinition process: EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage',DBMS_REDEFINITION.CONS_USE_PK); If no errors messages, you are good to go. Create a Partitioned Interim FileStorage table. 
You need to create a new table with the partition information to act as an interim table: CREATE TABLE FILESTORAGE_Part ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY RANGE (DLASTMODIFIED) INTERVAL (NUMTODSINTERVAL(1,'DAY')) STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C) ( PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_LEGACY LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ), PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY1 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ), PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY2 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ), PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY3 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ) ); After the creation you should see your partitions defined. Note that only the fixed range partitions have been created, none of the interval partition have been created. Start the redefinition process: BEGIN DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,col_mapping => NULL ,options_flag => DBMS_REDEFINITION.CONS_USE_PK ); END; This operation can take some time to complete, depending how many contents that you have and on the size of the table. Using the DBA user you can check the progress with this command: SELECT * FROM v$sesstat WHERE sid = 1; Copy dependent objects: DECLARE redefinition_errors PLS_INTEGER := 0; BEGIN DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS ,copy_triggers => TRUE ,copy_constraints => TRUE ,copy_privileges => TRUE ,ignore_errors => TRUE ,num_errors => redefinition_errors ,copy_statistics => FALSE ,copy_mvlog => FALSE ); IF (redefinition_errors > 0) THEN DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors)); END IF; END; With the DBA user, verify that there's no errors: SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS; *Note that will show 2 lines related to the constrains, this is expected. 
Synchronize the interim table FileStorage_PART:

  BEGIN
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
      uname => 'TESTS_OCS',
      orig_table => 'FileStorage',
      int_table => 'FileStorage_PART');
  END;

Gather statistics on the new table:

  EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

Complete the redefinition:

  BEGIN
    DBMS_REDEFINITION.FINISH_REDEF_TABLE(
      uname => 'TESTS_OCS',
      orig_table => 'FileStorage',
      int_table => 'FileStorage_PART');
  END;

During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the defined ranges, you should see the new partitions created automatically, so no error occurs if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

  DROP TABLE FileStorage_PART PURGE;

To check that the FileStorage table is valid and partitioned, use the command:

  SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

You can list the contents of the FileStorage table in a specific partition, for example:

  SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

Some useful commands to check the partitions (note that you need to run these as a DBA user):

  SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
  SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file; if the file is created and you can open it, everything is working. The redefinition process can be repeated many times, which allows you to test which layout works better, over and over again. Now some interesting maintenance actions related to the partitions: Make a tablespace read-only. Viewing is not affected, since WebCenter Content does not alter the revisions. When you try to delete content that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent such errors today is to create a custom component that checks the partitions and, if a document lives in a "read only" repository, deletes the metadata and marks the document for deletion during the next database maintenance, such as a new redefinition. Take a tablespace offline for archiving purposes or any other reason.
When you try open an document that is included in this tablespace will receive an error that was unable to retrieve the content, but the others online tablespaces are not affected. Same behavior when deleting documents. Again, an custom component is the solution. If you have an document “out of range”, the component can show an message that the repository for that document is offline. This can be extended to a option to the user to request to put online again. Moving some legacy content to an offline repository (table) using the Exchange option to move the content from one partition to a empty nonpartitioned table like FileStorage_LEGACY. Note that this option will remove the registers from the FileStorage and will not be able to open the stored content. You always need to keep in mind the indexes and constrains. An redefinition separating the original content (vault) from the renditions and separate by date ate the same time. This could be an option for DAM environments that want to have an special place for the renditions and put the original files in a storage with less performance. The process will be the same, you just need to change the script of the interim table to use composite partitioning. Will be something like: CREATE TABLE FILESTORAGE_RenditionPart ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY LIST (DRENDITIONID) SUBPARTITION BY RANGE (DLASTMODIFIED) ( PARTITION Vault VALUES ('primaryFile') ( SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION WebLayout VALUES ('webViewableFile') ( SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION Special VALUES ('Special') ( SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN 
(TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE) ) )ENABLE ROW MOVEMENT; The next post related to partitioned repository will come with an sample component to handle the possible exceptions when you need to take off line an tablespace/partition or move to another place. Also, we can include some integration to the Retention Management and Records Management. Another subject related to partitioning is the ability to create an FileStore Provider pointed to a different database, raising the level of the distributed storage vs. performance. Let us know if this is important to you or you have an use case not listed, leave a comment. Cross-posted on the blog.ContentrA.com
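    If the redefinition experiment is repeated often, the same DBMS_REDEFINITION calls shown above can be driven from a small script. A hedged sketch with cx_Oracle follows; the user, password and DSN are placeholders, the interim table must already exist, and the COPY_TABLE_DEPENDENTS and statistics steps from the walkthrough are omitted for brevity.

      import cx_Oracle

      # Placeholders: adjust credentials/DSN; FileStorage_PART must already exist.
      conn = cx_Oracle.connect("TESTS_OCS", "password", "dbhost/orcl")
      cur = conn.cursor()

      for step in ("START_REDEF_TABLE", "SYNC_INTERIM_TABLE", "FINISH_REDEF_TABLE"):
          cur.execute("""
              BEGIN
                DBMS_REDEFINITION.{step}(
                  uname      => 'TESTS_OCS',
                  orig_table => 'FileStorage',
                  int_table  => 'FileStorage_PART');
              END;""".format(step=step))

      conn.close()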

    Read the article

  • Is This a Valid Way to Use Blocks in Objective-C?

    - by Carter
    I've been building a HTTP client that uses web services to synchronize information between the client and server. I've been using Blocks and NSURLConnection to achieve this on the client side, but I'm getting frequent EXC_BAD_ACCESS crashes in objc_msgSend(). From what I understand, this usually means that a stored block that has fallen off the stack has been called. I think I've coded things correctly to avoid this, but I'm still stuck. Here is conceptually what my code is doing. It starts by calling "synchronizeWithWebServer". That method invokes "listRootObjectsOnServerWithBlock:" which takes in a block to be called when the method returns. "listRootObjectsOnServersWithBlock:" initiates a NSURLConnection to the web server asynchronously. It to expects a block to be called when it returns. Inside that block I want to be able to execute the original Block (so aptly named 'block'). This is only a simplified version of my code. The real synchronization process is more complex but it's mostly more of the same as what you see below. Sometimes the code works perfectly, but about 80% of the time it crashes very early on in the routine. It seems to be more vulnerable to crashing when my data set gets larger. - (void)synchronizeWithWebServer { [self listRootObjectsOnServerWithBlock:^(NSArray *results, NSError *error) { //Iterate over result objects and perform some other similar routines. }]; } - (void)listRootObjectsOnServerWithBlock:(void (^)(NSArray *results, NSError *error))block { //Create NSURLRequest Here //Create connection asynchronously. block = [block copy]; [NSURLConnection sendAsynchronousRequest:urlRequest queue:[NSOperationQueue currentQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){ //Parse response from web server (stored in NSData *data) NSArray *results = ..... //Call 'block' block(results, error); [block release]; }]; }

    Read the article

  • How to create a column containing a string of stars to indicate levels of a factor in a data frame

    - by PaulHurleyuk
    (second question today - must be a bad day) I have a dataframe with various columns, inculding a concentration column (numeric), a flag highlighting invalid results (boolean) and a description of the problem (character) dput(df) structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), rawconc = c(77.4, 52.6, 86.5, 44.5, 167, 16.2, 59.3, 123, 1.95, 181), reason = structure(c(NA, NA, 2L, NA, NA, NA, 2L, 1L, NA, NA), .Label = c("Fails Acceptance Criteria", "Poor Injection"), class = "factor"), flag = c("False", "False", "True", "False", "False", "False", "True", "True", "False", "False" )), .Names = c("x", "rawconc", "reason", "flag"), row.names = c(NA, -10L), class = "data.frame") I can create a column with the numeric level of the reason column df$level<-as.numeric(df$reason) df x rawconc reason flag level 1 1 77.40 <NA> False NA 2 2 52.60 <NA> False NA 3 3 86.50 Poor Injection True 2 4 4 44.50 <NA> False NA 5 5 167.00 <NA> False NA 6 6 16.20 <NA> False NA 7 7 59.30 Poor Injection True 2 8 8 123.00 Fails Acceptance Criteria True 1 9 9 1.95 <NA> False NA 10 10 181.00 <NA> False NA and here's what I want to do to create a column with 'level' many stars, but it fails df$stars<-paste(rep("*",df$level)sep="",collapse="") Error: unexpected symbol in "df$stars<-paste(rep("*",df$level)sep" df$stars<-paste(rep("*",df$level),sep="",collapse="") Error in rep("*", df$level) : invalid 'times' argument rep("*",df$level) Error in rep("*", df$level) : invalid 'times' argument df$stars<-paste(rep("*",pmax(df$level,0,na.rm=TRUE)),sep="",collapse="") Error in rep("*", pmax(df$level, 0, na.rm = TRUE)) : invalid 'times' argument It seems that rep needs to be fed one value at a time. I feel that this should be possible (and my gut says 'use lapply' but my apply fu is v. poor) ANy one want to try ?
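    The row-wise operation being asked for is "repeat a star level-many times, treating a missing level as zero stars". As a language-neutral illustration of that idea (sketched here in Python with pandas rather than R, since the general pattern is the same):

      import pandas as pd

      # Toy frame mirroring the question's columns (values are illustrative).
      df = pd.DataFrame({"reason": ["Poor Injection", None, "Fails Acceptance Criteria"],
                         "level": [2, None, 1]})

      # Repeat "*" level times per row; a missing level yields an empty string.
      df["stars"] = df["level"].apply(lambda n: "*" * int(n) if pd.notna(n) else "")
      print(df)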

    Read the article

  • Wine 1.6.2-Trying to switch to 32-bit Wineprefix from 64-bit Wine (Trusty 14.04). Can anyone help me out?

    - by AlternateSteve90
    Hello fellow Ubuntu users, I'm having a little trouble with Wine 1.6.1 and I was wondering if someone could help me out. I recently downloaded some 32-bit games that I'd wanted to try(BeamNG Drive and Bugbear's Next Car Game demo) and I had run into some trouble trying to get either of these games to run. So I came across a couple pieces of advice on the 'Net, one here on the Ubuntu community site and the other at BeamNG's forums, on how to create a 32-bit wineprefix on a 64-bit setup. I managed to be able to create the wine32 folder, but now I'm having trouble making it my default Wine setup. Anybody have any idea how I can do that? I'll post the URLs for said advice, btw: http://www.beamng.com/threads/1788-Installing-DRIVE-under-Linux-via-Wine How do I create a 32-bit WINE prefix? Here's what I've tried so far in the Terminal: steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot' wine: chdir to /home/user/wine32 : No such file or directory steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot' wine: created the configuration directory '/home/steven/wine32' fixme:storage:create_storagefile Storage share mode not implemented. err:mscoree:LoadLibraryShim error reading registry key for installroot err:mscoree:LoadLibraryShim error reading registry key for installroot err:mscoree:LoadLibraryShim error reading registry key for installroot err:mscoree:LoadLibraryShim error reading registry key for installroot fixme:storage:create_storagefile Storage share mode not implemented. fixme:iphlpapi:NotifyAddrChange (Handle 0x10ee890, overlapped 0x10ee89c): stub wine: configuration in '/home/steven/wine32' has been updated. steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=$HOME/.wine32 wine dxsetup.exe wine: created the configuration directory '/home/steven/.wine32' fixme:storage:create_storagefile Storage share mode not implemented. err:mscoree:LoadLibraryShim error reading registry key for installroot err:mscoree:LoadLibraryShim error reading registry key for installroot err:mscoree:LoadLibraryShim error reading registry key for installroot err:mscoree:LoadLibraryShim error reading registry key for installroot fixme:storage:create_storagefile Storage share mode not implemented. fixme:iphlpapi:NotifyAddrChange (Handle 0x103e2b8, overlapped 0x103e2d0): stub fixme:storage:create_storagefile Storage share mode not implemented. fixme:iphlpapi:NotifyAddrChange (Handle 0x10fe890, overlapped 0x10fe89c): stub wine: configuration in '/home/steven/.wine32' has been updated. wine: cannot find L"C:\windows\system32\dxsetup.exe" steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win64 winecfgsteven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot' steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win32 winecfg wine: WINEARCH set to win32 but '/home/steven/.wine' is a 64-bit installation. 
steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot' steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot' wine: chdir to /home/user/wine32 : No such file or directory steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot' steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH='win32' wine 'wineboot' steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH=win32 wine wineboot steven@steven-HP-Pavilion-17-Notebook-PC:~$ So, yeah. TBH, though, I'm far from an expert and perhaps I've been going about it all the wrong way. In the meantime, I'll try to keep looking for solutions on my own, but if anybody can help me solve this dilemma, especially if anyone happens to own any of these two games in particular, I'd appreciate it. :)

    Read the article

  • They may block off Howard Street—but Oracle OpenWorld is a two-way street.

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Sr. Director, Oracle Accelerate for Midsize Companies. “Engineered to Inform and Inspire”—that’s the theme of Oracle OpenWorld 2012. In early October, tens of thousands of attendees will descend on the streets of San Francisco because they share one thing in common: the desire to learn more about Oracle. You might think that’s the way we, Oracle employees, look at this event—as just another opportunity for attendees to learn about what we do. But it’s really a two-way street. Every year I’m amazed by how informed and inspired I am by our customers and their companies. Midsize companies buy Oracle to grow. As part of the Oracle Accelerate for Midsize Companies team I get to talk with our partners and business leaders at growing companies almost every day, usually via phone. Oracle OpenWorld presents the perfect opportunity to meet some of them in person, in an informal setting, and in one of the most beautiful cities in the world. The stories our customers tell me about their businesses provide vivid examples of how they have overcome the challenges of managing increasingly complex global operations and growing during uncertain economic conditions. It’s no secret that my favorite session at Oracle OpenWorld (besides Larry Ellison’s keynotes and the Customer Appreciation Event, of course) is the Oracle Accelerate Customer Panel. This year we’re featuring executives from three companies who deployed Oracle ERP rapidly to support their company’s growth:
    Chris Powell, VP and Corporate Controller of Beats by Dr. Dre, a California-based designer and manufacturer of premium headphones (sorry, no free samples),
    Iñaki Zuazo, CIO of Industrias Juno, a building materials provider based in Spain, and
    Kamran Moosa, Project Coordinator for Spartan Engineering, a provider of engineering and construction support services for an LPG storage project in Texas.
    That’s a pretty diverse lineup and it will be interesting to hear the perspectives of both IT and financial project stakeholders. The session, “Oracle Accelerate Customer Case Studies: Rapid Deployment of Oracle Applications”, is at 3:30 pm on Wednesday, October 3, in the Concert room at the Palace Hotel. Oracle loves our hometown of San Francisco and it’s a great place to host Oracle OpenWorld. It’s now San Francisco’s largest conference and the city closes off Howard Street to better accommodate the attendees. Some Bay Area commuters may be inconvenienced for a few days by this closure but the conference brings about $100 million into the local economy. Now that’s a two-way street.
More Oracle Accelerate at Oracle OpenWorld:
“Faster, Better, Cheaper Application Deployment with Oracle Business Accelerators”, Monday, October 1st, 10:45 a.m., Moscone West Room 3016
“Oracle Accelerate and Oracle Business Accelerators for Midsize Companies” (partners only), Wednesday, October 3, 10:15 a.m., Marriott – Golden Gate B
Visit the Oracle Accelerate and Oracle Business Accelerator Kiosk in the Moscone West Exhibit Grounds
Download the Focus On Oracle Accelerate for Midsize Companies Focus document

    Read the article

  • NEC Corporation uPD720200 USB 3.0 controller doesn't run at full speed

    - by Radek Zyskowski
    I have fresh install of Ubuntu 10.10. I have external HD on USB 3.0. Trying to connect this via PCI Express NEC controller. dmesg: [ 8966.820078] usb 6-3: new high speed USB device using xhci_hcd and address 0 [ 8966.839831] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.840580] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.841329] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.842079] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.843343] scsi8 : usb-storage 6-3:1.0 [ 8967.847144] scsi 8:0:0:0: Direct-Access SAMSUNG HD204UI 1AQ1 PQ: 0 ANSI: 5 [ 8967.847589] sd 8:0:0:0: Attached scsi generic sg2 type 0 [ 8967.847923] sd 8:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) [ 8967.848341] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.850959] sd 8:0:0:0: [sdb] Write Protect is off [ 8967.850963] sd 8:0:0:0: [sdb] Mode Sense: 23 00 00 00 [ 8967.850966] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.851818] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.852365] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.852370] sdb: sdb1 [ 8967.871315] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.871853] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.871856] sd 8:0:0:0: [sdb] Attached SCSI disk [ 8967.950728] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.951355] sd 8:0:0:0: [sdb] Sense Key : Recovered Error [current] [descriptor] [ 8967.951361] Descriptor sense data with sense descriptors (in hex): [ 8967.951363] 72 01 04 1d 00 00 00 0e 09 0c 00 00 00 00 00 00 [ 8967.951375] 00 00 00 00 00 50 [ 8967.951380] sd 8:0:0:0: [sdb] ASC=0x4 ASCQ=0x1d [ 8968.790076] xhci_hcd 0000:02:00.0: HC died; cleaning up [ 8968.790076] usb 6-3: USB disconnect, address 2 [ 8999.008554] scsi 8:0:0:0: [sdb] Unhandled error code [ 8999.008558] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK [ 8999.008562] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 39 00 00 3e 00 [ 8999.008573] end_request: I/O error, dev sdb, sector 1953535801 [ 8999.008578] Buffer I/O error on device sdb1, logical block 1953535738 [ 8999.008582] Buffer I/O error on device sdb1, logical block 1953535739 [ 8999.008585] Buffer I/O error on device sdb1, logical block 1953535740 [ 8999.008589] Buffer I/O error on device sdb1, logical block 1953535741 [ 8999.008592] Buffer I/O error on device sdb1, logical block 1953535742 [ 8999.008595] Buffer I/O error on device sdb1, logical block 1953535743 [ 8999.008600] Buffer I/O error on device sdb1, logical block 1953535744 [ 8999.008603] Buffer I/O error on device sdb1, logical block 1953535745 [ 8999.008606] Buffer I/O error on device sdb1, logical block 1953535746 [ 8999.008609] Buffer I/O error on device sdb1, logical block 1953535747 [ 8999.008642] scsi 8:0:0:0: rejecting I/O to offline device [ 8999.008747] scsi 8:0:0:0: [sdb] Unhandled error code [ 8999.008749] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 8999.008752] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 77 00 00 3e 00 [ 8999.008760] end_request: I/O error, dev sdb, sector 1953535863 sudo lspci -v 2:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30) Physical Slot: 32 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fe9fe000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] 
MSI-X: Enable- Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff Capabilities: [150] #18 Kernel driver in use: xhci_hcd Kernel modules: xhci-hcd If I try to put into this controller any USB 2.0, it works fine. But USB 3.0 nope. Any idea?

    Read the article

  • UEC - Can the Cluster Controller and Storage Controller be separate systems?

    - by Jeremy Hajek
    My department is implementing an Ubuntu Enterprise Cloud. I have done the testing and am quite comfortable with the 4 pieces: CC/SC, CLC, WS, NC. Looking at the various documents below, it appears that the Storage Controller and Cluster Controller (eucalyptus-sc and eucalyptus-cc) are always installed on the same system. My question is this: can I install the storage controller and the cluster controller on separate systems?
    http://open.eucalyptus.com/wiki/EucalyptusAdvanced_v2.0 (the picture indicates that cc and sc are two different machines)
    http://www.canonical.com/sites/default/files/active/Whitepaper-UbuntuEnterpriseCloudArchitecture-v1.pdf (p. 10, first paragraph, uses the word "machine(s)")
    http://software.intel.com/file/31966 (p. 8 indicates the same separate architecture)
    BUT... https://help.ubuntu.com/community/UEC/PackageInstallSeparate indicates that the SC and CC are to be on the same system.

    Read the article

  • Why does storage's performance change at various queue depths?

    - by Mxx
    I'm in the market for a storage upgrade for our servers. I'm looking at benchmarks of various PCIe SSD devices, and in the comparisons I see that IOPS change at various queue depths. How can that be, and why is it happening? The way I understand things is: I have a device with a maximum (theoretical) of 100k IOPS. If my workload consistently produces 100,001 IOPS, I'll have a queue depth of 1, am I correct? However, from what I see in benchmarks, some devices run slower at lower queue depths, then speed up at depths of 4-64, and then slow down again at even larger depths. Isn't queue depth a property of the OS (or perhaps the storage controller), so why would it affect IOPS?
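    A rough way to see why the numbers move with queue depth is Little's Law: sustained IOPS is approximately the number of outstanding I/Os divided by the average completion latency, so a device only approaches its headline figure when the host keeps enough requests in flight, and it falls off again once the device saturates and latency climbs. A back-of-the-envelope sketch with made-up numbers:

      # Little's Law, roughly: IOPS ~= outstanding I/Os / average latency.
      # The latency figure below is illustrative, not from any real device.
      latency_s = 100e-6                      # 100 microseconds per I/O

      for queue_depth in (1, 4, 32, 64):
          iops = queue_depth / latency_s
          print(f"QD={queue_depth:>2}: ~{iops:,.0f} IOPS (until the device saturates)")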

    Read the article

  • Subterranean IL: Fault exception handlers

    - by Simon Cooper
    Fault event handlers are one of the two handler types that aren't available in C#. It behaves exactly like a finally, except it is only run if control flow exits the block due to an exception being thrown. As an example, take the following method: .method public static void FaultExample(bool throwException) { .try { ldstr "Entering try block" call void [mscorlib]System.Console::WriteLine(string) ldarg.0 brfalse.s NormalReturn ThrowException: ldstr "Throwing exception" call void [mscorlib]System.Console::WriteLine(string) newobj void [mscorlib]System.Exception::.ctor() throw NormalReturn: ldstr "Leaving try block" call void [mscorlib]System.Console::WriteLine(string) leave.s Return } fault { ldstr "Fault handler" call void [mscorlib]System.Console::WriteLine(string) endfault } Return: ldstr "Returning from method" call void [mscorlib]System.Console::WriteLine(string) ret } If we pass true to this method the following gets printed: Entering try block Throwing exception Fault handler and the exception gets passed up the call stack. So, the exception gets thrown, the fault handler gets run, and the exception propagates up the stack afterwards in the normal way. If we pass false, we get the following: Entering try block Leaving try block Returning from method Because we are leaving the .try using a leave.s instruction, and not throwing an exception, the fault handler does not get called. Fault handlers and C# So why were these not included in C#? It seems a pretty simple feature; one extra keyword that compiles in exactly the same way, and with the same semantics, as a finally handler. If you think about it, the same behaviour can be replicated using a normal catch block: try { throw new Exception(); } catch { // fault code goes here throw; } The catch block only gets run if an exception is thrown, and the exception gets rethrown and propagates up the call stack afterwards; exactly like a fault block. The only complications that occur is when you want to add a fault handler to a try block with existing catch handlers. Then, you either have to wrap the try in another try: try { try { // ... } catch (DirectoryNotFoundException) { // ... // leave.s as normal... } catch (IOException) { // ... throw; } } catch { // fault logic throw; } or separate out the fault logic into another method and call that from the appropriate handlers: try { // ... } catch (DirectoryNotFoundException ) { // ... } catch (IOException ioe) { // ... HandleFaultLogic(); throw; } catch (Exception e) { HandleFaultLogic(); throw; } To be fair, the number of times that I would have found a fault handler useful is minimal. Still, it's quite annoying knowing such functionality exists, but you're not able to access it from C#. Fortunately, there are some easy workarounds one can use instead. Next time: filter handlers.
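    For readers outside .NET, the same shape, cleanup that runs only on the exception path and then lets the exception keep propagating, is exactly the catch-all-and-rethrow idiom shown above. A small Python analogue of the IL example, purely to illustrate the semantics a fault handler gives you:

      def fault_example(throw_exception):
          print("Entering try block")
          try:
              if throw_exception:
                  print("Throwing exception")
                  raise RuntimeError()
              print("Leaving try block")
          except BaseException:
              # Plays the role of the fault handler: runs only when an
              # exception is in flight, then re-raises so it propagates.
              print("Fault handler")
              raise
          print("Returning from method")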

    Read the article

  • generate parent child relation from the array to print a multi-level menu?

    - by Karthick Selvam
    How to get parent child relation from this array to print a multi-level menu $menus = array ( 0 => array ( 'id'=>0, 'check' => 1, 'display' =>'Arete Home', 'ordering' => -10, 'parent' => none, ), 1 => array ( 'id'=>1, 'check' => 1, 'display' => 'Submit Paper', 'ordering' => -10, 'parent' => 2, 'subordering' => -10, ), 2 => array ( 'id'=>2, 'check' => 1, 'display' => 'Buy Now', 'ordering' => -10, 'parent' => 1, 'subordering' => -10, ), 1461 => array ( 'id'=>1461, 'check' => 1, 'display' => 'Where are We?', 'ordering' => -10, 'parent' => 2, 'subordering' => -10, ), 1463 => array ( 'id'=>1463, 'check' => 1, 'display' =>' About Me?', 'ordering' => -10, 'parent' => 2, 'subordering' => -10, ), 1464 => array ( 'id'=>1464, 'check' => 1, 'display' => 'About You?', 'ordering' => -10, 'parent' => 2, 'subordering' => -10, ), 1465 => array ( 'id'=>1465, 'check' => 1, 'display' => 'About who?', 'ordering' => -10, 'parent' => 1, 'subordering' => -10, ), ); code sample: foreach($menus as $id=>$values) { $values['parent']=isset($values['parent']) ? $values['parent'] : 0; $menus[$values['parent']]['childs'][$id]=$values; unset($menus[$id]); } foreach($menus as $id1=>$value2) { $value2['parent']=isset($value2['parent']) ? $value2['parent'] : 0; $menus[$value2['parent']]['childs'][$id1]=$value2; unset($menus[$id1]); }
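    The snippet above unsets entries from $menus while building the child lists, so the result depends on iteration order. The usual technique is: index every item by id, then attach each item to its parent's child list, keeping items with no (or an unknown) parent as roots. A sketch of that idea, written in Python rather than PHP for illustration; note that in the sample data items 1 and 2 name each other as parents, so a real implementation also needs a cycle guard:

      def build_tree(items):
          """items: {id: {'id', 'display', 'parent', 'ordering', ...}} -> roots."""
          nodes = {i: dict(item, children=[]) for i, item in items.items()}
          roots = []
          for node in nodes.values():
              parent = node.get("parent")
              if parent in nodes and parent != node["id"]:
                  nodes[parent]["children"].append(node)
              else:
                  roots.append(node)        # missing/unknown parent => top level
          return roots

      def print_menu(nodes, depth=0):
          for node in sorted(nodes, key=lambda n: n.get("ordering", 0)):
              print("  " * depth + node["display"])
              print_menu(node["children"], depth + 1)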

    Read the article

  • SQL SERVER – ERROR: FIX using Compatibility Level – Database diagram support objects cannot be installed because this database does not have a valid owner – Part 2

    - by pinaldave
    Earlier I wrote a blog post about how to resolve the error with database diagrams. Today I faced the same error when I was dealing with a database which was upgraded from SQL Server 2005 to SQL Server 2008 R2. When I was searching for the solution online I ended up on my own earlier solution: SQL SERVER – ERROR: FIX – Database diagram support objects cannot be installed because this database does not have a valid owner. I really found it interesting that I ended up on my own solution. However, the solution to the problem this time was a bit different. Let us see how we can resolve it.

    Error: Database diagram support objects cannot be installed because this database does not have a valid owner. To continue, first use the Files page of the Database Properties dialog box or the ALTER AUTHORIZATION statement to set the database owner to a valid login, then add the database diagram support objects.

    Workaround / Fix / Solution: Follow the steps listed below and it should for sure solve your problem. (NOTE: Please try this for databases upgraded from a previous version. Everybody else should just follow the steps mentioned here.)

    1. Select your database >> Right Click >> Select Properties
    2. Go to the Options page
    3. In the dropdown at right labeled “Compatibility Level”, choose “SQL Server 2005 (90)”
    4. Select FILES on the left side of the page
    5. In the OWNER box, select the button which has three dots (…) in it
    6. Now select user ‘sa’ or NT AUTHORITY\SYSTEM and click OK

    This will solve your problem. However, there is one very important note you must consider. When you change any database owner, there are always security-related implications. I suggest you check your security policies before changing authorization. I did this to quickly solve my problem on my development server. If you are on a production server, you may open yourself to a potential security compromise.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • preview form using javascript in popup

    - by user1015309
    Please, I need some help with previewing a form in a popup. I have a quite big form, so I added a preview option that shows in a popup. The lightbox popup works well, but the problem I now have is getting function passform() to pass the inputs (textfield, select, checkbox, radio) into the popup page for preview on click. Below are my JavaScript and HTML code. I left the CSS and some HTML out because I think they're not needed. I will appreciate your help. Thank you.

    The JavaScript:

    function gradient(id, level) {
        var box = document.getElementById(id);
        box.style.opacity = level;
        box.style.MozOpacity = level;
        box.style.KhtmlOpacity = level;
        box.style.filter = "alpha(opacity=" + level * 100 + ")";
        box.style.display = "block";
        return;
    }

    function fadein(id) {
        var level = 0;
        while (level <= 1) {
            setTimeout("gradient('" + id + "'," + level + ")", (level * 1000) + 10);
            level += 0.01;
        }
    }

    // Open the lightbox
    function openbox(formtitle, fadin) {
        var box = document.getElementById('box');
        document.getElementById('shadowing').style.display = 'block';
        var btitle = document.getElementById('boxtitle');
        btitle.innerHTML = formtitle;
        if (fadin) {
            gradient("box", 0);
            fadein("box");
        } else {
            box.style.display = 'block';
        }
    }

    // Close the lightbox
    function closebox() {
        document.getElementById('box').style.display = 'none';
        document.getElementById('shadowing').style.display = 'none';
    }

    // Pass form fields into variables
    var divexugsotherugsexams1 = document.getElementById('divexugsotherugsexams1');
    var exugsotherugsexams1 = document.form4.exugsotherugsexams1.value;

    function passform() {
        divexugsotherugsexams1.innerHTML = document.form4.exugsotherugsexams1.value;
    }

    The HTML (with just one text field to try):

    <p><input name="submit4" type="submit" class="button2" id="submit4" value="Preview Note" onClick="openbox('Preview Note', 1)"/></p>
    <div id="shadowing"></div>
    <div id="box">
        <span id="boxtitle"></span>
        <div id="divexugsotherugsexams1"></div>
        <script>document.write('<PARAM name="SRC" VALUE="' + exugsotherugsexams1 + '">')</script>
        <a href="#" onClick="closebox()">Close</a>
    </div>

    Read the article

  • How to know the level of a symlink in linux?

    - by ???
    For example, with a chain of symlinks

    a -> b
    b -> c
    c -> d

    the symlink level of a is 3. Is there any utility to get this info? I also want to get the expansion detail of a symlink, which would show me something like:

    1. /abc/xyz is expanded to /abc/xy/z (lrwx--x--x root root)
    2. /abc/xy/z is expanded to /abc/xy-1.3.2/z (lrwx--x--x root root)
    3. /abc/xy-1.3.2/z is expanded to /abc/xy-1.3.2/z-4.6 (lrwx--x--x root root)
    4. /abc/xy-1.3.2/z-4.6 is expanded to /storage/121/43/z_4_6 (lrwx--x--x root root)
    5. /storage/121/43/z_4_6 is expanded to /media/kitty_3135/43/z_4_6 (lrwx--x--x root root)

    so that I can diagnose problems with the symlinks. Any ideas?
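    Whether or not a ready-made utility reports the chain length as a single number, the walk itself is simple: repeatedly resolve the link with readlink until the target is no longer a symlink. A hedged sketch in Python (the loop guard and output format are my own, not an existing tool):

    #!/usr/bin/env python3
    # Sketch: count the symlink hops of a path and print each expansion step.
    import os
    import sys

    def symlink_chain(path, max_hops=40):
        """Yield (source, target) for each hop until a non-symlink is reached."""
        hops = 0
        while os.path.islink(path):
            target = os.readlink(path)
            # A relative target is resolved against the directory containing the link.
            if not os.path.isabs(target):
                target = os.path.join(os.path.dirname(path), target)
            yield path, target
            path = target
            hops += 1
            if hops >= max_hops:  # crude guard against symlink loops
                raise RuntimeError("too many levels of symbolic links")

    if __name__ == "__main__":
        link = sys.argv[1]
        level = 0
        for src, dst in symlink_chain(link):
            level += 1
            print("%d. %s is expanded to %s" % (level, src, dst))
        print("symlink level of %s is %d" % (link, level))

    For the a -> b -> c -> d example this prints three expansion lines and reports a level of 3.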

    Read the article

  • First order logic formula

    - by user177883
    R(x): x is a red block
    B(x): x is a blue block
    T(x,y): block x is on top of block y

    Question: Write a formula asserting that if no red block is on top of a red block, then no red block is on top of itself.

    My answer (A = "for all"):

    (Ax)(Ay)(R(x) and R(y) -> ~T(x,y)) -> (Ax)(R(x) -> ~T(x,x))
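    A hedged rendering of that answer in standard notation, assuming the dashes were meant as implication arrows and "and" as conjunction:

    \forall x \, \forall y \, \bigl( (R(x) \land R(y)) \rightarrow \lnot T(x,y) \bigr) \;\rightarrow\; \forall x \, \bigl( R(x) \rightarrow \lnot T(x,x) \bigr)

    The antecedent says no red block sits on top of any red block, so the consequent, that no red block sits on top of itself, follows as the special case y = x.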

    Read the article

  • Recursive t-sql query

    - by stackoverflowuser
    Hi, I have a table as shown below.

    ID    ParentID   Node Name   Node Type
    ---------------------------------------
    525   524        Root        Area Level 1
    526   525        C           Area Level 2
    527   525        A           Area Level 2
    528   525        D           Area Level 2
    671   525        E           Area Level 2
    660   527        B           Area Level 3
    672   671        F           Area Level 3

    How can I write a recursive T-SQL query to generate the output below? (The "Root" node is not required in the output.)

    Node   ID
    -----------
    A      527
    A/B    660
    C      526
    D      528
    E      671
    E/F    672

    Thanks
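    In T-SQL this is typically done with a recursive common table expression that starts from the children of the Root row and appends "/" plus the child's name at each level. As a hedged sketch of just that path-building logic (in Python, not the T-SQL answer itself, with the sample rows hard-coded):

    # Sketch: derive "A", "A/B", ... paths from (id, parent_id, name) rows,
    # starting below the Root node, to mirror the desired output.
    rows = [
        (525, 524, 'Root'), (526, 525, 'C'), (527, 525, 'A'), (528, 525, 'D'),
        (671, 525, 'E'), (660, 527, 'B'), (672, 671, 'F'),
    ]

    children = {}
    for node_id, parent_id, name in rows:
        children.setdefault(parent_id, []).append((node_id, name))

    def walk(parent_id, prefix=''):
        # Depth-first: each node's path is its parent's path plus its own name.
        for node_id, name in sorted(children.get(parent_id, []), key=lambda r: r[1]):
            path = prefix + name
            yield path, node_id
            yield from walk(node_id, path + '/')

    for path, node_id in walk(525):  # 525 is the id of the 'Root' row
        print(path, node_id)

    The recursive CTE version follows the same anchor-plus-recursive-member shape: the anchor selects the rows whose ParentID is the Root's ID, and the recursive member joins the CTE back to the table on ParentID, concatenating the path as it recurses.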

    Read the article

  • How do I change a child's parent in NHibernate when cascade is delete-all-orphan?

    - by Daniel T.
    I have two entities in a bi-directional one-to-many relationship:

    public class Storage
    {
        public IList<Box> Boxes { get; set; }
    }

    public class Box
    {
        public Storage CurrentStorage { get; set; }
    }

    And the mapping:

    <class name="Storage">
        <bag name="Boxes" cascade="all-delete-orphan" inverse="true">
            <key column="Storage_Id" />
            <one-to-many class="Box" />
        </bag>
    </class>

    <class name="Box">
        <many-to-one name="CurrentStorage" column="Storage_Id" />
    </class>

    A Storage can have many Boxes, but a Box can only belong to one Storage. I have them mapped so that the one-to-many has a cascade of all-delete-orphan. My problem arises when I try to change a Box's Storage. Assume I have already run this code:

    var storage1 = new Storage();
    var storage2 = new Storage();
    storage1.Boxes.Add(new Box());
    Session.Create(storage1);
    Session.Create(storage2);

    The following code will give me an exception:

    // get the first and only box in the DB
    var existingBox = Database.GetBox().First();

    // remove the box from storage1
    existingBox.CurrentStorage.Boxes.Remove(existingBox);

    // add the box to storage2 after it's been removed from storage1
    var storage2 = Database.GetStorage().Second();
    storage2.Boxes.Add(existingBox);

    Session.Flush(); // commit changes to DB

    I get the following exception:

    NHibernate.ObjectDeletedException : deleted object would be re-saved by cascade (remove deleted object from associations)

    This exception occurs because I have the cascade set to all-delete-orphan. The first Storage detected that I removed the Box from its collection and marked it for deletion. However, when I added it to the second Storage (in the same session), it attempted to save the box again and the ObjectDeletedException was thrown. My question is, how do I get the Box to change its parent Storage without encountering this exception? I know one possible solution is to change the cascade to just all, but then I lose the ability to have NHibernate automatically delete a Box by simply removing it from a Storage and not re-associating it with another one. Or is this the only way to do it, and do I have to manually call Session.Delete on the box in order to remove it?

    Read the article

  • Where does the temporary flash file get stored when I am viewing it in Firefox?

    - by Nishant
    I am watching a lecture and it seems to be Adobe Flash. I want to save this video that I am viewing. The website I am checking is http://cs75.tv/2009/fall/ . I am using Firefox. Don't know if this info helps, but my about:cache result is this:

    Memory cache device
    Number of entries: 212
    Maximum storage size: 13312 KiB
    Storage in use: 8087 KiB
    Inactive storage: 6819 KiB
    List Cache Entries

    Disk cache device
    Number of entries: 3224
    Maximum storage size: 500000 KiB
    Storage in use: 26066 KiB
    Cache Directory: C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\Cache
    List Cache Entries

    Offline cache device
    Number of entries: 0
    Maximum storage size: 512000 KiB
    Storage in use: 0 KiB
    Cache Directory: C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\OfflineCache
    List Cache Entries

    Read the article

< Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >