Search Results

Search found 546 results on 22 pages for 'dat chu'.

Page 4/22 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Truncating a string while storing it in an array in C

    - by Nick
    I am trying to create an array of 20 character strings, each holding at most 17 characters, read from a file named "words.dat". The program should truncate each string to its first 17 characters and completely ignore the rest of that string. My question is that I am not quite sure how to accomplish this; can anyone give me some insight on how to approach it? Here is my current code as is: #include <stdio.h> #include <stdlib.h> #define WORDS 20 #define LENGTH 18 char function1(char[WORDS][LENGTH]); int main( void ) { char word_array [WORDS] [LENGTH]; function1(word_array); return ( 0 ) ; } char function1(char word_array[WORDS][LENGTH]) { FILE *wordsfile = fopen("words.dat", "r"); int i = 0; if (wordsfile == NULL) printf("\nwords.dat was not properly opened.\n"); else { for (i = 0; i < WORDS; i++) { fscanf(wordsfile, "%17s", word_array[i]); printf ("%s \n", word_array[i]); } fclose(wordsfile); } return (word_array[WORDS][LENGTH]); } words.dat file: Ninja DragonsFury failninja dragonsrage leagueoflegendssurfgthyjnu white black red green yellow green leagueoflegendssughjkuj dragonsfury Sword sodas tiger snakes Swords Snakes sage Sample output: blahblah@fang:~>a.out Ninja DragonsFury failninja dragonsrage leagueoflegendssu rfgthyjnu white black red green yellow green leagueoflegendssu ghjkuj dragonsfury Sword sodas tiger snakes Swords blahblah@fang:~> What I plan to do with this program afterwards: once function1 works properly, I will create a second function named "function2" that will search the array for pairs of words that match exactly, including case. After that I will create a third function that displays the 20 character strings read from the words.dat file and the matching words.
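
    One way to get the behaviour described above is to discard the unread tail of any word longer than 17 characters, so that it is not picked up as a separate word by the next fscanf(). A minimal sketch of that idea (the helper name read_words is made up for the example and is not part of the assignment):

        #include <ctype.h>
        #include <stdio.h>

        #define WORDS  20
        #define LENGTH 18                /* 17 characters + terminating '\0' */

        /* Read up to WORDS whitespace-separated words from fp, keeping at most
           17 characters of each and silently discarding the rest of any longer
           word so it is not read again as a separate entry.  Returns the number
           of words actually stored. */
        int read_words(FILE *fp, char words[WORDS][LENGTH])
        {
            int n = 0;
            while (n < WORDS && fscanf(fp, "%17s", words[n]) == 1) {
                int c;
                while ((c = fgetc(fp)) != EOF && !isspace(c))
                    ;                    /* throw away the over-long tail */
                n++;
            }
            return n;
        }

    Note also that the original return (word_array[WORDS][LENGTH]); reads one element past the end of the array in both dimensions; returning the number of words read (or making the function void) avoids that.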

    Read the article

  • How to read only one line from a data file in C++

    - by Markus Hupfauer
    I tried to read the first line of a .dat file, but when I print the text that was saved in the .dat file, it prints out the whole file, not only one line. The output also ignores line breaks and spaces. :( I'm using the following code: // read Vocabeln.dat ifstream f; // file handle string s; f.open("Vocabeln.dat", ios::in); // open the file given as a parameter while (!f.eof()) // while data remains { getline(f, s); // read one line cout << s; } f.close(); // close the file again getchar(); Could you help me please? Thanks a lot, Markus
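
    If only the first line is wanted, a single call to getline is enough; the eof() loop is what prints the whole file. A minimal sketch (it assumes the Vocabeln.dat file name from the question):

        #include <fstream>
        #include <iostream>
        #include <string>

        int main()
        {
            std::ifstream f("Vocabeln.dat");   // open the vocabulary file
            std::string s;

            if (std::getline(f, s))            // read exactly one line
                std::cout << s << '\n';        // print it with a line break
            else
                std::cerr << "could not read Vocabeln.dat\n";
        }

    The missing line breaks in the original output come from cout << s; never printing a newline; getline itself strips the '\n' from each line it reads.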

    Read the article

  • Exception message (Python 2.6)

    - by TurboJupi
    If I try to open a binary file (in Python 2.6) that doesn't exist, the program exits with an error and prints this: Traceback (most recent call last): File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 4, in <module> pkl_file = open('monitor.dat', 'rb') IOError: [Errno 2] No such file or directory: 'monitor.dat' I can handle this with 'try-except', like: try: pkl_file = open('monitor.dat', 'rb') monitoring_pickle = pickle.load(pkl_file) pkl_file.close() except Exception: print 'No such file or directory' Does anybody know how I could print the following line from the caught exception? File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 11, in <module> pkl_file = open('monitor.dat', 'rb') That way the program would not exit, and I would still have useful information.
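
    The standard traceback module can print exactly that information from inside the except block without letting the program exit. A minimal sketch (works on Python 2.6, using the monitor.dat name from the question):

        import pickle
        import traceback

        try:
            pkl_file = open('monitor.dat', 'rb')
            monitoring_pickle = pickle.load(pkl_file)
            pkl_file.close()
        except IOError:
            # prints the full "Traceback (most recent call last): ..." block,
            # including the file name and line number, and carries on
            traceback.print_exc()

    traceback.format_exc() returns the same text as a string if it should be logged rather than printed.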

    Read the article

  • How to plot two cumulative frequency graphs together

    - by neversaint
    I have data that looks like this: #val Freq1 Freq2 0.000 178 202 0.001 4611 5300 0.002 99 112 0.003 26 30 0.004 17 20 0.005 15 20 0.006 11 14 0.007 11 13 0.008 13 13 ...many more lines.. Full data can be found here: http://dpaste.com/173536/plain/ What I intend to do is to draw a cumulative graph with "val" on the x-axis and "Freq1" & "Freq2" on the y-axis, plotted together in one graph. I have this code, but it creates two plots instead of one. dat <- read.table("stat.txt",header=F); val<-dat$V1 freq1<-dat$V2 freq2<-dat$V3 valf1<-rep(val,freq1) valf2<-rep(val,freq2) valfreq1table<- table(valf1) valfreq2table<- table(valf2) cumfreq1=c(0,cumsum(valfreq1table)) cumfreq2=c(0,cumsum(valfreq2table)) plot(cumfreq1, ylab="CumFreq",xlab="Loglik Ratio") lines(cumfreq1) plot(cumfreq2, ylab="CumFreq",xlab="Loglik Ratio") lines(cumfreq2) What's the right way to approach this?
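
    Both curves can go into one set of axes with a single call to plot() followed by lines() for the second series. A sketch reusing the cumfreq1/cumfreq2 vectors built above (the colours and legend position are arbitrary choices):

        ylim <- range(c(cumfreq1, cumfreq2))      # leave room for both curves
        plot(cumfreq1, type = "l", col = "blue", ylim = ylim,
             ylab = "CumFreq", xlab = "Loglik Ratio")
        lines(cumfreq2, col = "red")
        legend("bottomright", legend = c("Freq1", "Freq2"),
               col = c("blue", "red"), lty = 1)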

    Read the article

  • Reading binary file with Octave

    - by Anthony Blake
    I'm trying to read a binary file consisting of floats with Octave (on OS X), but I'm getting the following error: octave-3.2.3:2> load Input.dat R -binary error: load: failed to read matrix from file `Input.dat' The file was written like so: std::ofstream fout("Input.dat", std::ios::trunc | std::ios::binary); fout.write(reinterpret_cast<char*>(Buf), N*sizeof(double)); fout.close(); Any idea what could be going wrong here?
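
    load -binary expects a file written in Octave's own save format, whereas the C++ code above writes a raw stream of doubles, so fread is the more direct route. A minimal sketch (the variable names are arbitrary):

        fid = fopen("Input.dat", "rb");
        R = fread(fid, Inf, "double");    % read every double in the file
        fclose(fid);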

    Read the article

  • model.matrix() with na.action=NULL?

    - by Vincent
    I have a formula and a data frame, and I want to extract the model.matrix(). However, I need the resulting matrix to include the NAs that were found in the original dataset. If I were to use model.frame() to do this, I would simply pass it na.action=NULL. However, the output I need is of the model.matrix() format. Specifically, I need only the right-hand side variables, I need the output to be a matrix (not a data frame), and I need factors to be converted to a series of dummy variables. I'm sure I could hack something together using loops or something, but I was wondering if anyone could suggest a cleaner and more efficient workaround. Thanks a lot for your time! And here's an example: dat <- data.frame(matrix(rnorm(20),5,4), gl(5,2)) dat[3,5] <- NA names(dat) <- c(letters[1:4], 'fact') ff <- a ~ b + fact # This omits the row with a missing observation on the factor model.matrix(ff, dat) # This keeps the NA, but it gives me a data frame and does not dichotomize the factor model.frame(ff, dat, na.action=NULL) Here is what I would like to obtain: (Intercept) b fact2 fact3 fact4 fact5 1 1 0.7266086 0 0 0 0 2 1 -0.6088697 0 0 0 0 3 NA 0.4643360 NA NA NA NA 4 1 -1.1666248 1 0 0 0 5 1 -0.7577394 0 1 0 0 6 1 0.7266086 0 1 0 0 7 1 -0.6088697 0 0 1 0 8 1 0.4643360 0 0 1 0 9 1 -1.1666248 0 0 0 1 10 1 -0.7577394 0 0 0 1

    Read the article

  • Storage subsystem borking after server restart (all on a Parallel SCSI bus)

    - by Dat Chu
    I have a server (with a SCSI HBA) connected to two Promise VTrak M310p RAID enclosures on the same bus. Everything works fine until I have to restart my server. Once restarted, the server can no longer communicate with the enclosures: lots of read errors and bus resets. I have to turn off both enclosures, then turn off the server, then turn on the enclosures, then turn on the server for things to work. I don't believe this is the normal behavior; what could I be missing?

    Read the article

  • Can I get redundancy with a JBOD storage subsystem

    - by Dat Chu
    I have a Promise Technology J610S. This is a JBOD subsystem. Is it possible for me to buy a SAS hardware RAID controller and provide some type of redundancy for these drives? I am unsure whether I will use Linux or Windows yet, so an answer covering both would be highly appreciated. One solution I thought of: if my J610S can export each drive as a target, my server will simply see 16 drives, and the RAID controller can then perform RAID5/RAID6 if I want.

    Read the article

  • Migrate ASP.Net web site from IIS6 to IIS7

    - by David.Chu.ca
    I have to migrate an ASP.Net web site from IIS6 to IIS7. I tried to copy all the files for the web site from IIS6 (c:\inetpub\wwwroot\MySite) to another box with Windows Server 2008 R2, where IIS7 is the default web server. However, a simple copy does not seem to work. Should I rebuild the web site for IIS7, or should I make changes on the new box with IIS7, such as to web.config? Thanks for the comments. On further investigation I found that the http handlers section seems to cause an exception: <!--httpHandlers> <add path="Reserved.ReportViewerWebControl.axd" verb="*" type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" validate="false"/> </httpHandlers--> After I comment out the above handler in web.config, the web page works fine. This is just my initial test. I am not sure if I should rebuild the web site from source code or not. If so, do I need to specify anything for IIS7?
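
    Under IIS7's integrated pipeline, handlers are registered in <system.webServer><handlers> (where a name attribute is required) rather than in <system.web><httpHandlers>. A hedged sketch of what the equivalent entry could look like, reusing the type string from the question; the name value is an arbitrary label and the attributes should be checked against the ReportViewer version actually deployed:

        <system.webServer>
          <handlers>
            <add name="ReportViewerWebControlHandler"
                 preCondition="integratedMode"
                 verb="*"
                 path="Reserved.ReportViewerWebControl.axd"
                 type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
          </handlers>
        </system.webServer>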

    Read the article

  • Are there any drawbacks to the Major.Minor.YMDD.Build version strategy?

    - by Chu
    I'm trying to come up with a good version strategy to fit our specific needs. We've proposed settling on this and I wanted to ask the question to see if anyone's experience would suggest avoiding this or altering it in any way. Here's our proposal: Versions are released in this format: MAJOR.MINOR.YMDD.BN. Here it is broken out: MAJOR & MINOR are typical; we'll increase MINOR when we feel code and new feature sets warrant it; once every few months most likely. MAJOR will increase ~yearly. YMDD: Y will be the last digit of the current year, so "1" for 2011, "2" for 2012, etc. A non-padded month will be used to keep the number smaller (9 instead of 09 for example). DD of course is the day, padded with a zero for days under 10. BN: BN is the build number and increases by one anytime we make a change to a branch of the code represented by the build, for example: If we were to make a build today, our release would be version 5.0.1707.1. I release to QA today and 3 days from now QA finds that a change broke the save functionality on a page. Instead of me changing our current development code, I'd go back to the code that I used to create version 5.0.1707.1, make the fix there, then increase the BN portion of the version and would then re-release 5.0.1707.2 back to QA. In short, anytime a change is made to a branched version that isn't the active dev branch, we'd use the original version number and increase only the BN portion (even if the change happened 3 days, 3 weeks or 3 months from the initial release of that version). Anytime we make a new release from our active dev branch, we'd come up with a new version based on the M/D of the release using the outlined strategy. We do this once every 2-3 weeks. Are there holes or pitfalls with this? If so, what are they? Thanks EDIT To clarify one point that I didn't get out very well - Oct/Nov/Dec will be two digits; it's only the year that won't be. So 9 for Sept, 10 for Oct, 11 for Nov, etc.

    Read the article

  • What are some Java patterns well-suited for fast, algorithmic coding?

    - by Casey Chu
    I'm in college, and I've recently started competing in programming competitions with my friends. These competitions involve solving algorithmic problems quickly. It's a lot of fun, but there's one problem: I'm forced to use Java. (My teammates use Java.) Background: I'm a self-taught JavaScript programmer, and it hurts to write Java code. I find it very verbose and inflexible, and I feel slowed down when having to declare types and decide which of the eighty list data structures to use. I'm also frustrated by the lack of functional programming features and by how verbose using regular expressions, arrays, and dictionaries is. As an example, consider the problem of finding the length of the longest string of consecutive characters in a given string. So the string XX22BBBBccXX222 would give 4, for the string of four Bs. In Java, I'd have to loop through and manually count characters and manually keep track of the maximum. (That's at least as far as I'm aware -- I'm not as familiar with Java as I am with JavaScript.) In JavaScript, I'd find it like this: var max = Math.max.apply(Math, str.match(/(.)\1*/g).map(function (s) { return s.length; })); Much quicker and simpler, in my book. The question: what are some Java features, techniques, or patterns well-suited for fast, algorithmic coding?
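
    For what it's worth, the same regex idea ports to Java almost directly via java.util.regex, so this particular example at least does not need a manual counting loop. A sketch:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class LongestRun {
            public static void main(String[] args) {
                String str = "XX22BBBBccXX222";
                Matcher m = Pattern.compile("(.)\\1*").matcher(str);
                int max = 0;
                while (m.find()) {
                    max = Math.max(max, m.group().length());
                }
                System.out.println(max);   // 4, for the run of four Bs
            }
        }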

    Read the article

  • Extract and view Outlook contacts attachment sent to Gmail

    - by matt wilkie
    A friend forwarded a contact list to my gmail account from Outlook (2007 or 2010, not sure which). I can see there is an attachment in gmail but when I save it to my local drive it's just a plain text file containing the text This attachment is a MAPI 1.0 embedded message and is not supported by this mail system. If I use gmail's "show original message" it contains in part: This is a multipart message in MIME format. ------=_NextPart_000_0016_01CC6656.CE12F030 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit ------=_NextPart_000_0016_01CC6656.CE12F030 Content-Type: application/ms-tnef; name="winmail.dat" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="winmail.dat" eJ8+Ih0VAQaQCAAEAAAAAAABAAEAAQeQBgAIAAAA5AQAAAAAAADoAAEIgAcAGAAAAElQTS5NaWNy b3NvZnQgTWFpbC5Ob3RlADEIAQgABQAEAAAAAAAAAAAAAQkABAACAAAAAAAAAAEDkAYASAgAACgA --8<---snip---8<-- GUC/9NKH95rABgMA/g8HAAAAAwANNP0/pQ4DAA80/T+lDvAm ------=_NextPart_000_0016_01CC6656.CE12F030-- How do I save the attached winmail.dat properly, and open the winmail.dat and extract the contact list? I'm running Windows 7 x64, but have access to an ubuntu linux vmware appliance if needed. I have Outlook 2010, but can't use it to connect directly to gmail as pop3 and imap are blocked by the corporate firewall.
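
    One possible route, given the Ubuntu VM mentioned above: save the raw message (gmail's "show original") to a file, pull the winmail.dat part out of it, and unpack the TNEF container with the tnef utility. A rough sketch, not a guaranteed recipe - the package names are the ones in the Ubuntu repositories and the message file name is a placeholder:

        sudo apt-get install mpack tnef    # munpack extracts MIME parts, tnef unpacks winmail.dat
        munpack original-message.eml       # writes winmail.dat into the current directory
        tnef winmail.dat                   # extracts whatever attachments it contains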

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume of information that never stops growing. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database allows you to leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It's up to you how you will manage that; you just need a good plan. During your "storage brainstorm plan", keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that a document can receive a lot of revisions, so maybe you should consider the revision creation date instead. Your plan can also have more complex rules, like per Document Type, per a custom metadata field such as department, or a hybrid per date, per DocType and a specific virtual folder. Extrapolating the idea, you can have your repository distributed across different servers, different disks, different disk types (such as SSDs, SAS, SATA, tape, …), separated according to your business requirements, keeping the "hot" content apart from the legacy content and easily matching your compliance requirements. If you plan to separate by revision, the simple way is to consider the dId, the sequential unique id of every content item created in WebCenter Content, or the dLastModified, the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "Partition by Range" on the dLastModified column (you can use the dId, or a join with other tables for other metadata such as dDocType, Security, etc…). The test scenario below covers: previously existing data on the JDBC Storage, to be migrated to the new partitioned JDBC Storage; partitioning by date; automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+); deduplication and compression for legacy data; Oracle WebCenter Content 11g PS5 (which may contain some customizations that do not affect the test scenario). For the test case you need some data stored using JDBC Storage to act as the "legacy" data. If you have not done this before, just create a Storage rule pointed to the JDBC Storage: enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I must also say that this is just a test and you should take a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed: REM Grant privileges required for online redefinition.
GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS; GRANT ALTER ANY TABLE TO TESTS_OCS; GRANT DROP ANY TABLE TO TESTS_OCS; GRANT LOCK ANY TABLE TO TESTS_OCS; GRANT CREATE ANY TABLE TO TESTS_OCS; GRANT SELECT ANY TABLE TO TESTS_OCS; REM Privileges required to perform cloning of dependent objects. GRANT CREATE ANY TRIGGER TO TESTS_OCS; GRANT CREATE ANY INDEX TO TESTS_OCS; In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will partitioned automatically using 3 tablespaces in a round robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Table spaces for the test scenario: CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A 'tests_ocs_part_round_robin_a.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B 'tests_ocs_part_round_robin_b.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C 'tests_ocs_part_round_robin_c.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; Before start, gather optimizer statistics on the actual FileStorage table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE); Now check if is possible execute the redefinition process: EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage',DBMS_REDEFINITION.CONS_USE_PK); If no errors messages, you are good to go. Create a Partitioned Interim FileStorage table. 
You need to create a new table with the partition information to act as an interim table: CREATE TABLE FILESTORAGE_Part ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY RANGE (DLASTMODIFIED) INTERVAL (NUMTODSINTERVAL(1,'DAY')) STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C) ( PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_LEGACY LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ), PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY1 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ), PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY2 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ), PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY3 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ) ); After the creation you should see your partitions defined. Note that only the fixed range partitions have been created, none of the interval partition have been created. Start the redefinition process: BEGIN DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,col_mapping => NULL ,options_flag => DBMS_REDEFINITION.CONS_USE_PK ); END; This operation can take some time to complete, depending how many contents that you have and on the size of the table. Using the DBA user you can check the progress with this command: SELECT * FROM v$sesstat WHERE sid = 1; Copy dependent objects: DECLARE redefinition_errors PLS_INTEGER := 0; BEGIN DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS ,copy_triggers => TRUE ,copy_constraints => TRUE ,copy_privileges => TRUE ,ignore_errors => TRUE ,num_errors => redefinition_errors ,copy_statistics => FALSE ,copy_mvlog => FALSE ); IF (redefinition_errors > 0) THEN DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors)); END IF; END; With the DBA user, verify that there's no errors: SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS; *Note that will show 2 lines related to the constrains, this is expected. 
Synchronize the interim table FileStorage_PART: BEGIN DBMS_REDEFINITION.SYNC_INTERIM_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; Gather statistics on the new table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE); Complete the redefinition: BEGIN DBMS_REDEFINITION.FINISH_REDEF_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed ranges, you should see the new partitions created automatically, so no error is raised if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table: DROP TABLE FileStorage_PART PURGE; To check that the FileStorage table is valid and partitioned, use the command: SELECT num_rows,partitioned FROM user_tables WHERE table_name = 'FILESTORAGE'; You can list the contents of the FileStorage table in a specific partition, for example: SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY) Some useful commands you can use to check the partitions (note that you need to run them as a DBA user): SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE'; SELECT * FROM DBA_TABLESPACES WHERE tablespace_name like 'TESTS_OCS%'; After the redefinition process completes you have a new FileStorage table storing all content whose Storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file was created and opens correctly; this means everything is working. The redefinition process can be repeated many times, which allows you to test which layout works best, over and over again. Now some interesting maintenance actions related to the partitions: Make a tablespace read-only. There are no issues viewing content, since WebCenter Content does not alter the revisions. When you try to delete a content item that belongs to a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if a document lives in a "read only" repository, executes the deletion of the metadata and marks the document to be deleted on the next DB maintenance, like a new redefinition. Take a tablespace offline for archiving purposes or any other reason.
When you try open an document that is included in this tablespace will receive an error that was unable to retrieve the content, but the others online tablespaces are not affected. Same behavior when deleting documents. Again, an custom component is the solution. If you have an document “out of range”, the component can show an message that the repository for that document is offline. This can be extended to a option to the user to request to put online again. Moving some legacy content to an offline repository (table) using the Exchange option to move the content from one partition to a empty nonpartitioned table like FileStorage_LEGACY. Note that this option will remove the registers from the FileStorage and will not be able to open the stored content. You always need to keep in mind the indexes and constrains. An redefinition separating the original content (vault) from the renditions and separate by date ate the same time. This could be an option for DAM environments that want to have an special place for the renditions and put the original files in a storage with less performance. The process will be the same, you just need to change the script of the interim table to use composite partitioning. Will be something like: CREATE TABLE FILESTORAGE_RenditionPart ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY LIST (DRENDITIONID) SUBPARTITION BY RANGE (DLASTMODIFIED) ( PARTITION Vault VALUES ('primaryFile') ( SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION WebLayout VALUES ('webViewableFile') ( SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION Special VALUES ('Special') ( SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN 
(TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE) ) )ENABLE ROW MOVEMENT; The next post related to partitioned repository will come with an sample component to handle the possible exceptions when you need to take off line an tablespace/partition or move to another place. Also, we can include some integration to the Retention Management and Records Management. Another subject related to partitioning is the ability to create an FileStore Provider pointed to a different database, raising the level of the distributed storage vs. performance. Let us know if this is important to you or you have an use case not listed, leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • rsync to windows (cygwin)

    - by abergmeier
    We have a Windows file storage (don't ask) and now I want to rsync with the machine from Windows, Mac and Linux. So I installed freeSSHd (the login shell is set to C:/cygwin64/bin/sh.exe), set up certificates, and when testing from Linux, test.dat has 0 bytes: ssh myuser@winmachinename "C:/cygwin64/bin/true.exe" > test.dat Even double-checking with actual output works fine: ssh myuser@winmachinename "C:/cygwin64/bin/ls.exe" > test.dat Now, when I call rsync: rsync --progress -avz -e ssh myuser@winmachinename:/c/Users ~/test it fails with: protocol version mismatch -- is your shell clean? (see the rsync man page for an explanation) rsync error: protocol incompatibility (code 2) at compat.c(174) [Receiver=3.1.0] As far as I read the docs, this should not happen when the first test is successful. I am out of ideas by now - any recommendations on how to debug this? EDIT: | OS | rsync version | |:--------------|:------------------------------------------| | Windows | rsync version 3.0.9 protocol version 30 | | Linux | rsync version 3.1.0 protocol version 31 |
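
    One thing that may be worth ruling out: the rsync that freeSSHd's sh.exe finds on its PATH might not be the Cygwin one (or might not be found at all), and anything other than a clean rsync protocol greeting on the remote side's stdout produces exactly this message. A sketch using rsync's standard --rsync-path option, with the Cygwin path taken from the question:

        # confirm which rsync the remote side runs, and that nothing else writes to stdout
        ssh myuser@winmachinename "C:/cygwin64/bin/rsync.exe --version" > remote-rsync.txt

        # then point the client at that binary explicitly
        rsync --progress -avz -e ssh \
              --rsync-path="C:/cygwin64/bin/rsync.exe" \
              myuser@winmachinename:/c/Users ~/test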

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the correpsonding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same, however the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings whe data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or to a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicilty, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply within using the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below. 
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,300), rep(2,300), rep(3,300)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Cannot delete apt-fast for a clean install

    - by colby
    This is my problem: $ destroy apt-fast [sudo] password for colbyryptos: Reading package lists... Done Building dependency tree Reading state information... Done Package apt-fast is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable Setting up man-db (2.6.1-2) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing man-db (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: man-db E: Sub-process /usr/bin/dpkg returned an error code (1) I have also tried sudo rm /var/lib/dpkg/lock, followed by sudo dpkg --configure -a. It then gives me this $ sudo dpkg --configure -a [sudo] password for colbyryptos: Setting up man-db (2.6.1-2) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing man-db (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: man-db
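
    The message names the file being locked, so one way forward (a sketch, not a guaranteed fix) is to find the process that still has /var/cache/debconf/config.dat open, stop it, and then let dpkg finish configuring:

        sudo fuser -v /var/cache/debconf/config.dat   # show the PID holding the lock
                                                      # (often a stuck apt, aptitude, dpkg or software-center)
        sudo kill <PID>                               # stop that process, or close it normally
        sudo dpkg --configure -a                      # then retry the pending configuration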

    Read the article

  • What files should be excluded from a complete Windows backup?

    - by tro
    I'm starting to use CrashPlan to backup my Win 7 PC. I've got it writing to my external HD (for quick local restores) and to CrashPlan Central (for offsite storage). I'd like to backup my entire C:\ drive (the only partition) in a way that: Preserves all of my installed software and configuration, but Avoids backing up log files and other ephemeral / temporary files that are regenerated during normal operation of the OS. Which files and/or directories should I be excluding from backups? I'd like to make this a community wiki, so that we could all contribute towards a definitive list. Here's a list of regular expressions identifying the directories and files that CrashPlan excludes on Windows by default listed at http://support.crashplan.com/doku.php/articles/admin_excludes: .*/(?:42|\d{8,})/(?:cp|~).* (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.* .*\.part .*/iPhoto Library/iPod Photo Cache/.* .*\.cprestoretmp.* *\.rbf :/Config\\.Msi.* .*/Google/Chrome/.*cache.* .*/Mozilla/Firefox/.*cache.* .*\$RECYCLE\.BIN/.* .*/System Volume Information/.* .*/RECYCLER/.* .*/I386.* .*/pagefile.sys .*/MSOCache.* .*UsrClass\.dat\.LOG .*UsrClass\.dat .*/Temporary Internet Files/.* (?i).*/ntuser.dat.* .*/Local Settings/Temp.* .*/AppData/Local/Temp.* .*/AppData/Temp.* .*/Windows/Temp.* (?i).*/Microsoft.*/Windows/.*\.log .*/Microsoft.*/Windows/Cookies.* .*/Microsoft.*/RecoveryStore.* (?i).:/Config\\.Msi.* (?i).*\\.rbf .*/Windows/Installer.* Other excludes: .*\.(class|obj) .*/hiberfil.sys (?i).*\.tmp (?i).*/temp/ (?i).*/tmp/ .*Thumbs\.db .*/Local Settings/History/ .*/NetHood/ .*/PrintHood/ .*/Cookies/ .*/Recent/ .*/SendTo/

    Read the article

  • How to avoid the loop limitation in an OpenVZ container?

    - by mat.viguier
    On an OpenVZ container running Deb7 I need to cap the maximum size of a folder, which is used for uploads on a PHP-based web server. The directory is synced, so I have to lock the maximum size. MAXSIZE should be upgradable by adding some physical disk later ... I want to use a file as a block device for a file system. So I have done: dd if=/dev/zero of=/disk2/filesystem.dat bs=1M count=100 Then I made the filesystem on it: mkfs.ext4 filesystem.dat Then I tried to mount it: mkdir /opt/filesystem ; mount /disk2/filesystem.dat /opt/filesystem My OpenVZ container (it is on a VPS) has no loop module in the kernel, so I got "Could not find any loop device", as usual under OpenVZ. So I think I have to use FUSE, but I really do not know how ... Any idea on locking the size of a directory under OpenVZ?

    Read the article

  • Exporting DataTable using FileHelpers

    - by Dat
    I want to export the contents of a DataTable to a delimited text file using FileHelpers; is this possible? Here is what I have so far: // dt is a DataTable with Rows in it DelimitedClassBuilder cb = new DelimitedClassBuilder("MyClassName", "|", dt); Type t = cb.CreateRecordClass(); FileHelperEngine engine = new FileHelperEngine(t); I have to convert the contents of dt to an array of type "MyClassName", but I'm not sure how to do that. I know there is a FileDataLink class, but none of those links work with a DataTable (or even a DataSet).
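
    If mapping the rows onto the generated record type turns out to be awkward, a plain StreamWriter loop over the DataTable produces the same pipe-delimited output without FileHelpers at all. A minimal sketch (the class, method and delimiter are made up for the example):

        using System;
        using System.Data;
        using System.IO;
        using System.Linq;

        static class DelimitedExport
        {
            // Write every row of the DataTable as one pipe-delimited line of text.
            public static void Write(DataTable dt, string path)
            {
                using (var writer = new StreamWriter(path))
                {
                    foreach (DataRow row in dt.Rows)
                    {
                        string[] fields = row.ItemArray
                                             .Select(v => Convert.ToString(v))
                                             .ToArray();
                        writer.WriteLine(string.Join("|", fields));
                    }
                }
            }
        }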

    Read the article

  • stdio's remove() not always deleting on time.

    - by Kyte
    For a particular piece of homework, I'm implementing a basic data storage system using sequential files under standard C, which cannot load more than 1 record at a time. So, the basic part is creating a new file where the results of whatever we do with the original records are stored. The previous file's renamed, and a new one under the working name is created. The code's compiled with MinGW 5.1.6 on Windows 7. Problem is, this particular version of the code (I've got nearly-identical versions of this floating around my functions) doesn't always remove the old file, so the rename fails and hence the stored data gets wiped by the fopen(). FILE *archivo, *antiguo; remove("IndiceNecesidades.old"); // This randomly fails to work in time. rename("IndiceNecesidades.dat", "IndiceNecesidades.old"); // So rename() fails. antiguo = fopen("IndiceNecesidades.old", "rb"); // But apparently it still gets deleted, since this turns out null (and I never find the .old in my working folder after the program's done). archivo = fopen("IndiceNecesidades.dat", "wb"); // And here the data gets wiped. Basically, anytime the .old previously exists, there's a chance it's not removed in time for the rename() to take effect successfully. No possible name conflicts both internally and externally. The weird thing's that it's only with this particular file. Identical snippets except with the name changed to Necesidades.dat (which happen in 3 different functions) work perfectly fine. // I'm yet to see this snippet fail. FILE *antiguo, *archivo; remove("Necesidades.old"); rename("Necesidades.dat", "Necesidades.old"); antiguo = fopen("Necesidades.old", "rb"); archivo = fopen("Necesidades.dat", "wb"); Any ideas on why would this happen, and/or how can I ensure the remove() command has taken effect by the time rename() is executed? (I thought of just using a while loop to force call remove() again so long as fopen() returns a non-null pointer, but that sounds like begging for a crash due to overflowing the OS with delete requests or something.)
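
    Whatever the underlying cause, checking the return values (and errno, which the MinGW/MSVCRT runtime does set for these calls) before opening the new file would at least stop the fopen("wb") from wiping the data when the rename has not actually happened. A sketch along those lines - the function name is made up, and note that on Windows remove() and rename() also fail while any FILE* opened on the file has not yet been fclose()d, which is worth double-checking in the other code paths:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        /* Rotate IndiceNecesidades.dat -> .old, but refuse to continue (and so
           refuse to truncate the .dat) if any step fails. */
        int rotate_indice(void)
        {
            if (remove("IndiceNecesidades.old") != 0 && errno != ENOENT) {
                fprintf(stderr, "remove: %s\n", strerror(errno));
                return -1;
            }
            if (rename("IndiceNecesidades.dat", "IndiceNecesidades.old") != 0) {
                fprintf(stderr, "rename: %s\n", strerror(errno));
                return -1;
            }
            return 0;   /* safe to fopen("IndiceNecesidades.dat", "wb") now */
        }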

    Read the article

  • Google I/O 2010 - Fireside chat w/ Android handset partners

    Google I/O 2010 - Fireside chat with Android handset manufacturers (Fireside Chats, Android). Lori Fraleigh (Motorola), Bill Maggs (Sony Ericsson), Joon Kang (LGE), Ciaran Rochford (Samsung), Eric Chu (Google; moderator). Come join us for a fireside chat with the top Android handset manufacturers. Hear about the types of devices being planned for 2010 and get your device-specific questions answered. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 8 | 0 ratings | Time: 01:02:57 | More in Science & Technology

    Read the article

  • Technical seminars: UML modelling with Enterprise Architect, by Objet Direct, in April in

    Technical seminars: UML modelling with Enterprise Architect, by Objet Direct. April 7 in Lyon, April 8 in Grenoble, April 14 in Paris (9am-11am). With feedback from projects carried out by Objet Direct at EDF, PSA, the Conseil d'Etat and the CHU de Grenoble. For UML models that are useful, up to date and shared: Enterprise Architect is a latest-generation CASE tool that makes UML accessible to everyone as a common language between the teams within the IT department: the business side (maîtrise d'ouvrage), the delivery side (maîtrise d'oeuvre), and the development and maintenance teams all share the same view of the information system and of each application's features, thanks to models that focus on the essentials and are consulted and kept up to...

    Read the article

  • Extract InstallShield CAB files when there are no HDR files?

    - by user433531
    layout.bin setup.lid _sys1.cab _user1.cab DATA.TAG data1.cab SETUP.INI setup.ins _INST32I.EX_ SETUP.EXE _ISDEL.EXE _SETUP.DLL lang.dat os.dat I want to extract an InstallShield 5 install package; above is the list of files in the "data1" folder. However, there are no *.hdr files, so I can't extract the CAB files using the tools found on the Internet, even though the package still installs without any error. Can anybody give me a suggestion for this, please?
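
    InstallShield 5 archives keep their header inside data1.cab itself rather than in a separate .hdr file, so the open-source unshield tool may be able to read this layout. This is only a suggestion to try, not a guarantee for this particular package:

        sudo apt-get install unshield   # or build it from source on other systems
        unshield l data1.cab            # list the files in the archive
        unshield x data1.cab            # extract everything into the current directory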

    Read the article

  • Transferring PuTTY session data

    - by toolkit
    My Windows NT account name was changed, and when starting PuTTY it now appears that my saved session information has been lost. The FAQ suggests that PuTTY sessions should be stored in HKEY_CURRENT_USER\Software\SimonTatham\PuTTY. Wikipedia explains that HKCU maps to NTUSER.DAT and USRCLASS.DAT under the current user's profile folder. I still have these files for my old account name, but I'm guessing there is no easy way to extract data from them?
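
    There is a reasonably painless way: load the old NTUSER.DAT as a temporary hive with reg.exe and export the PuTTY key from it. A sketch to run from an elevated command prompt (the profile path and the OldHive name are placeholders):

        reg load HKU\OldHive "C:\Users\oldname\NTUSER.DAT"
        reg export HKU\OldHive\Software\SimonTatham\PuTTY putty-sessions.reg
        reg unload HKU\OldHive

    Editing putty-sessions.reg to replace HKEY_USERS\OldHive with HKEY_CURRENT_USER and then importing it should restore the sessions under the new account name.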

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >