Search Results

Search found 161595 results on 6464 pages for 'user defined data type'.


  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data?

    Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets, including master data, and give them contextual value. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards. The following table contains an illustrative list of examples of reference data by type.

    | Types & Codes | Taxonomies | Relationships / Mappings | Standards |
    |---|---|---|---|
    | Transaction Codes | Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601) |
    | Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO) |
    | Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN) |
    | Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601) |
    | Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates |

    Why manage reference data?

    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much as master data management does. Moreover, to build reports to understand utilization, frequency, and quality of diagnoses, medical practitioners may need to "cross-walk" mappings, either forward to ICD-10 or backward to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify, and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), ACH, and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws, governed at the state and local level, that regulate financial product innovation as it relates to consumer loans, check cashing, and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico, and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications are impacted by a given change.

    Read the article

  • Extract all related class type aliasing and enum into one file or not

    - by Chen OT
    I have many models in my project, and some other classes need only the class declarations and pointer type aliases. They do not need to know the class definitions, so I don't want to include the model header files. I extracted all the model declarations into one file so that every class references a single file.

    model_forward.h:

    ```cpp
    #include <memory>

    class Cat;
    typedef std::shared_ptr<Cat> CatPointerType;
    typedef std::shared_ptr<const Cat> CatConstPointerType;

    class Dog;
    typedef std::shared_ptr<Dog> DogPointerType;
    typedef std::shared_ptr<const Dog> DogConstPointerType;

    class Fish;
    typedef std::shared_ptr<Fish> FishPointerType;
    typedef std::shared_ptr<const Fish> FishConstPointerType;

    enum CatType {RED_CAT, YELLOW_CAT, GREEN_CAT, PURPLE_CAT};
    enum DogType {HATE_CAT_DOG, HUSKY, GOLDEN_RETRIEVER};
    enum FishType {SHARK, OCTOPUS, SALMON};
    ```

    Is this acceptable practice? Should I make every unit that needs a class declaration depend on one file? Does it cause high coupling? Or should I put these pointer type aliases and enum definitions back inside the classes?

    cat.h:

    ```cpp
    class Cat {
        typedef std::shared_ptr<Cat> PointerType;
        typedef std::shared_ptr<const Cat> ConstPointerType;
        enum Type {RED_CAT, YELLOW_CAT, GREEN_CAT, PURPLE_CAT};
        ...
    };
    ```

    dog.h:

    ```cpp
    class Dog {
        typedef std::shared_ptr<Dog> PointerType;
        typedef std::shared_ptr<const Dog> ConstPointerType;
        enum Type {HATE_CAT_DOG, HUSKY, GOLDEN_RETRIEVER};
        ...
    };
    ```

    fish.h:

    ```cpp
    class Fish {
        ...
    };
    ```

    Any suggestion will be helpful.
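    To illustrate the trade-off being asked about, here is a minimal sketch of a hypothetical consumer header (Zoo is not from the original question) that compiles against model_forward.h alone, since pointer aliases and declarations are enough as long as no member of Cat, Dog, or Fish is touched:

    ```cpp
    // zoo.h -- sketch of a consumer that needs only forward declarations.
    #include "model_forward.h"

    class Zoo {
    public:
        void adopt(CatPointerType cat);    // shared_ptr to an incomplete type
        void release(DogPointerType dog);  // is fine in declarations
    private:
        FishPointerType mascot_;           // the full Fish definition is only
                                           // needed in zoo.cpp, where the
                                           // destructor is instantiated
    };
    ```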

    Read the article

  • Partner Webcast - Oracle Data Integration Competency Center (DICC): A Niche Market for services

    - by Thanos Terentes Printzios
    Market success now depends on data integration speed. This is why we collected best practices from the most advanced IT leaders, simply to prove that a Data Integration competency center should be the primary new IT team you establish. This is a niche market with unlimited potential for partners becoming the much-needed data integration services provider trusted by customers. We would like to elaborate with OPN Partners on the Business Value Assessment and Total Economic Impact of the Data Integration Platform for End Users, while justifying re-organizing your IT services teams. We are happy to share our research on:

    - The economic impact of a data integration platform/competency center
    - Justifying the strongest reasons and differentiators, using numeric analysis and best-practice customer case studies from specific industries
    - Utilizing diagnostics and health-check analysis in building a business case for your customers
    - What exactly is so special about the technology of Oracle Data Integration
    - The impact of growing data volume and the number of data sources
    - Analysis of the usual solutions implemented so far, addressing key challenges and mistakes

    During this partner webcast we will balance business-case-centric content with extensive numerical ROI analysis. Join us to find out how to build a unified approach to moving/sharing/integrating data across the enterprise and why this is an important new services opportunity for partners.

    Agenda:
    - Data Integration Competency Center
    - Oracle Data Integration Solution Overview
    - Services Niche Market for OPN
    - Summary
    - Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.

    Presenter: Milomir Vojvodic, EMEA Senior Business Development Manager for Oracle Data Integration Product Group
    Date: Thursday, September 4th, 10am CEST (8am UTC / 11am EEST)
    Duration: 1 hour

    Register Today

    For any questions please contact us at [email protected]

    Read the article

  • Should one use a separate database for application data and user data?

    - by trycatch
    I’ve been working on a project for a little while and I’m unsure which is the better architecture. I’m interested in the consensus. The answer to me seems fairly obvious, but something about it is digging at me and I can't pick out what.

    The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application data, or both in one?

    The detailed version is: if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there is a need to update the application data within this database periodically, should it be split into two databases, so that one can simply download the updated application-data database file as an update and replace the old one? Or should they remain one database, with the application data updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason just doesn’t feel right, and I can't pick out quite why.
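    For what it's worth, the in-place update script the question describes is commonly written as an upsert. A minimal sketch in SQL Server's dialect, with hypothetical table names (the question doesn't name an engine or schema):

    ```sql
    -- Sketch (hypothetical schema): refresh application data in place while
    -- leaving user rows, and their foreign keys into AppData, untouched.
    MERGE INTO AppData AS target
    USING StagingAppData AS source
        ON target.AppDataId = source.AppDataId
    WHEN MATCHED THEN
        UPDATE SET target.Value = source.Value
    WHEN NOT MATCHED THEN
        INSERT (AppDataId, Value) VALUES (source.AppDataId, source.Value);
    ```

    Because existing keys keep their identity, user rows referencing application data survive the refresh, which is one argument for the single-database approach the poster leans toward.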

    Read the article

  • WCF Data Service BeginSaveChanges not saving changes in Silverlight app

    - by Enigmativity
    I'm having a hell of a time getting WCF Data Services to work within Silverlight. I'm using the VS2010 RC. I've struggled with the cross-domain issue requiring the use of clientaccesspolicy.xml & crossdomain.xml files in the web server root folder, but I just couldn't get this to work. I've resorted to putting both the Silverlight web app & the WCF Data Service in the same project to get past this issue, but any advice here would be good. Now that I can actually see my data coming from the database and being displayed in a data grid within Silverlight, I thought my troubles were over, but no. I can edit the data and the in-memory entity is changing, but when I call BeginSaveChanges (with the appropriate async EndSaveChanges call) I get no errors, but no data updates in the database. Here's my WCF Data Services code:

    ```csharp
    public class MyDataService : DataService<MyEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.All);
            config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            base.OnStartProcessingRequest(args);
            HttpContext context = HttpContext.Current;
            HttpCachePolicy c = HttpContext.Current.Response.Cache;
            c.SetCacheability(HttpCacheability.ServerAndPrivate);
            c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60));
            c.VaryByHeaders["Accept"] = true;
            c.VaryByHeaders["Accept-Charset"] = true;
            c.VaryByHeaders["Accept-Encoding"] = true;
            c.VaryByParams["*"] = true;
        }
    }
    ```

    I've pinched the OnStartProcessingRequest code from Scott Hanselman's article Creating an OData API for StackOverflow including XML and JSON in 30 minutes. Here's the code from my Silverlight app:

    ```csharp
    private MyEntities _wcfDataServicesEntities;
    private CollectionViewSource _customersViewSource;
    private ObservableCollection<Customer> _customers;

    private void UserControl_Loaded(object sender, RoutedEventArgs e)
    {
        if (!System.ComponentModel.DesignerProperties.GetIsInDesignMode(this))
        {
            _wcfDataServicesEntities = new MyEntities(new Uri("http://localhost:7156/MyDataService.svc/"));
            _customersViewSource = this.Resources["customersViewSource"] as CollectionViewSource;
            DataServiceQuery<Customer> query = _wcfDataServicesEntities.Customer;
            query.BeginExecute(result =>
            {
                _customers = new ObservableCollection<Customer>();
                Array.ForEach(query.EndExecute(result).ToArray(), _customers.Add);
                Dispatcher.BeginInvoke(() =>
                {
                    _customersViewSource.Source = _customers;
                });
            }, null);
        }
    }

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        _wcfDataServicesEntities.BeginSaveChanges(r =>
        {
            var response = _wcfDataServicesEntities.EndSaveChanges(r);
            string[] results = new[]
            {
                response.BatchStatusCode.ToString(),
                response.IsBatchResponse.ToString()
            };
            _customers[0].FinAssistCompanyName = String.Join("|", results);
        }, null);
    }
    ```

    The response string I get back data-binds to my grid OK and shows "-1|False". My intent is to get a proof of concept working here and then do the appropriate separation of concerns to turn this into a simple line-of-business app. I've spent hours and hours on this. I'm being driven insane. Any ideas how to get this working?
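    One plausible cause, offered as a hedged aside: DataServiceContext in this era did not track property changes automatically, so an edited entity must be flagged before saving (or bound through a DataServiceCollection, which does the flagging for you). A minimal sketch using the same names as the code above:

    ```csharp
    // Sketch: register the edited entity with the context before saving.
    // Without UpdateObject (or DataServiceCollection binding), SaveChanges
    // sees nothing dirty and issues no update, which matches the symptoms.
    _customers[0].FinAssistCompanyName = "New name";
    _wcfDataServicesEntities.UpdateObject(_customers[0]); // mark as modified
    _wcfDataServicesEntities.BeginSaveChanges(
        r => _wcfDataServicesEntities.EndSaveChanges(r), null);
    ```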

    Read the article

  • Serial port : Read data problem, not reading complete data

    - by Anuj Mehta
    Hi, I have an application where I am sending data via serial port from PC1 (Java app) and reading that data on PC2 (C++ app). The problem I am facing is that my PC2 (C++) app is not able to read the complete data sent by PC1: from PC1 I am sending 190 bytes, but PC2 reads only about 140 bytes, though I am trying to read in a loop. Below is a code snippet of my C++ app.

    Open the connection to the serial port:

    ```cpp
    serialfd = open(serialPortName.c_str(), O_RDWR | O_NOCTTY | O_NDELAY);
    if (serialfd == -1)
    {
        /* Could not open the port. */
        TRACE << "Unable to open port: " << serialPortName << endl;
    }
    else
    {
        TRACE << "Connected to serial port: " << serialPortName << endl;
        fcntl(serialfd, F_SETFL, 0);
    }
    ```

    Configure the serial port parameters:

    ```cpp
    struct termios options;

    /* Get the current options for the port... */
    tcgetattr(serialfd, &options);

    /* Set the baud rates to 38400... */
    cfsetispeed(&options, B38400);
    cfsetospeed(&options, B38400);

    /*
     * 8N1
     * Data bits - 8
     * Parity    - None
     * Stop bits - 1
     */
    options.c_cflag &= ~PARENB;
    options.c_cflag &= ~CSTOPB;
    options.c_cflag &= ~CSIZE;
    options.c_cflag |= CS8;

    /* Enable hardware flow control */
    options.c_cflag |= CRTSCTS;

    /* Enable the receiver and set local mode... */
    options.c_cflag |= (CLOCAL | CREAD);

    /* Flush the earlier data */
    tcflush(serialfd, TCIFLUSH);

    /* Set the new options for the port... */
    tcsetattr(serialfd, TCSANOW, &options);
    ```

    Now I am reading the data:

    ```cpp
    const int MAXDATASIZE = 512;
    std::vector<char> m_vRequestBuf;
    char buffer[MAXDATASIZE];
    int totalBytes = 0;

    fcntl(serialfd, F_SETFL, FNDELAY);

    while (1)
    {
        bytesRead = read(serialfd, &buffer, MAXDATASIZE);
        if (bytesRead == -1)
        {
            /* Sleep for some time and read again */
            usleep(900000);
        }
        else
        {
            totalBytes += bytesRead;

            /* Add the data read to the vector */
            for (int i = 0; i < bytesRead; i++)
            {
                m_vRequestBuf.push_back(buffer[i]);
            }

            int newBytesRead = 0;

            /* Now keep trying to read more data */
            while (newBytesRead != -1)
            {
                /* Clear the contents of the buffer */
                memset((void*)&buffer, 0, sizeof(char) * MAXDATASIZE);
                newBytesRead = read(serialfd, &buffer, MAXDATASIZE);
                if (newBytesRead > 0) /* guard: read returns -1 when no data is waiting */
                {
                    totalBytes += newBytesRead;
                }
                for (int j = 0; j < newBytesRead; j++)
                {
                    m_vRequestBuf.push_back(buffer[j]);
                }
            } /* inner while */
            break;
        }
    } /* outer while */
    ```
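    A likely explanation, for what it's worth: with FNDELAY set, read() returns immediately with whatever bytes have arrived so far, and serial data trickles in slowly, so the inner loop hits -1 (no data waiting yet) and breaks out before the remaining bytes arrive. One hedged sketch of an alternative, reusing the termios options from the configuration block above, is to use blocking reads with an inter-byte timeout and loop until the expected 190 bytes have been collected:

    ```cpp
    /*
     * Sketch (untested): blocking, non-canonical reads with a VMIN/VTIME
     * inter-byte timeout. read() blocks until at least one byte arrives,
     * then allows up to 1.0 s of silence between bytes before returning,
     * instead of returning -1 the instant the buffer is empty.
     */
    fcntl(serialfd, F_SETFL, 0);                 /* clear FNDELAY: block on read */
    options.c_lflag &= ~(ICANON | ECHO | ECHOE); /* raw, non-canonical input */
    options.c_cc[VMIN]  = 1;                     /* wait for at least 1 byte */
    options.c_cc[VTIME] = 10;                    /* 1.0 s inter-byte timeout */
    tcsetattr(serialfd, TCSANOW, &options);

    const int EXPECTED = 190;                    /* known message size here */
    while (totalBytes < EXPECTED)
    {
        int n = read(serialfd, buffer, MAXDATASIZE);
        if (n <= 0) break;                       /* timeout or error: give up */
        m_vRequestBuf.insert(m_vRequestBuf.end(), buffer, buffer + n);
        totalBytes += n;
    }
    ```

    If the message length isn't fixed, the usual design choice is to frame the protocol instead (length prefix or delimiter) and read until the frame is complete.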

    Read the article

  • Join and sum not compatible matrices through data.table

    - by leodido
    My goal is to "sum" two incompatible matrices (matrices with different dimensions) using (and preserving) row and column names. I've figured out this approach: convert the matrices to data.table objects, join them, and then sum the column vectors. An example:

    ```r
    > M1
      1 3 4 5 7 8
    1 0 0 1 0 0 0
    3 0 0 0 0 0 0
    4 1 0 0 0 0 0
    5 0 0 0 0 0 0
    7 0 0 0 0 1 0
    8 0 0 0 0 0 0
    > M2
      1 3 4 5 8
    1 0 0 1 0 0
    3 0 0 0 0 0
    4 1 0 0 0 0
    5 0 0 0 0 0
    8 0 0 0 0 0
    > M1 %ms% M2
      1 3 4 5 7 8
    1 0 0 2 0 0 0
    3 0 0 0 0 0 0
    4 2 0 0 0 0 0
    5 0 0 0 0 0 0
    7 0 0 0 0 1 0
    8 0 0 0 0 0 0
    ```

    This is my code:

    ```r
    M1 <- matrix(c(0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0),
                 byrow = TRUE, ncol = 6)
    colnames(M1) <- c(1,3,4,5,7,8)
    M2 <- matrix(c(0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
                 byrow = TRUE, ncol = 5)
    colnames(M2) <- c(1,3,4,5,8)

    # to data.table objects
    DT1 <- data.table(M1, keep.rownames = TRUE, key = "rn")
    DT2 <- data.table(M2, keep.rownames = TRUE, key = "rn")

    # join and sum of common columns
    if (nrow(DT1) > nrow(DT2)) {
      A <- DT2[DT1, roll = TRUE]
      A[, list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1,
               X5 = X5 + X5.1, X7, X8 = X8 + X8.1), by = rn]
    }
    ```

    That outputs:

    ```r
       rn X1 X3 X4 X5 X7 X8
    1:  1  0  0  2  0  0  0
    2:  3  0  0  0  0  0  0
    3:  4  2  0  0  0  0  0
    4:  5  0  0  0  0  0  0
    5:  7  0  0  0  0  1  0
    6:  8  0  0  0  0  0  0
    ```

    Then I can convert this data.table back to a matrix and fix the row and column names. The questions are:

    1. How do I generalize this procedure? I need a way to automatically create list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1, X5 = X5 + X5.1, X7, X8 = X8 + X8.1), because I want to apply this function to matrices whose dimensions (and row/column names) are not known in advance. In summary, I need a merge procedure that behaves as described.
    2. Are there other strategies/implementations that achieve the same goal and are, at the same time, faster and generalized? (Hoping that some data.table monster helps me.)
    3. To what kind of join (inner, outer, etc.) is this procedure assimilable?

    Thanks in advance.

    p.s.: I'm using data.table version 1.8.2

    EDIT - SOLUTIONS

    @Aaron's solution. No external libraries, only base R. It also works on a list of matrices.

    ```r
    add_matrices_1 <- function(...) {
      a <- list(...)
      cols <- sort(unique(unlist(lapply(a, colnames))))
      rows <- sort(unique(unlist(lapply(a, rownames))))
      out <- array(0, dim = c(length(rows), length(cols)), dimnames = list(rows, cols))
      for (m in a) out[rownames(m), colnames(m)] <- out[rownames(m), colnames(m)] + m
      out
    }
    ```

    @MadScone's solution. Uses the reshape2 package. It works only on two matrices per call.

    ```r
    add_matrices_2 <- function(m1, m2) {
      m <- acast(rbind(melt(m1), melt(m2)), Var1 ~ Var2, fun.aggregate = sum)
      mn <- unique(colnames(m1), colnames(m2))
      rownames(m) <- mn
      colnames(m) <- mn
      m
    }
    ```

    BENCHMARK (100 runs with the microbenchmark package):

    ```r
    Unit: microseconds
                expr       min         lq     median         uq       max
    1 add_matrices_1   196.009   257.5865    282.027   291.2735   549.397
    2 add_matrices_2 13737.851 14697.9790  14864.778 16285.7650 25567.448
    ```

    No need to comment on the benchmark: @Aaron's solution wins. I'll continue to investigate a similar solution for data.table objects. I'll add other solutions eventually reported or discovered.

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out-of-memory errors.

    An abridged description of the problem domain is as follows:

    - The application takes in a "dataset" that consists of numerous text files containing related data.
    - An individual text file within the dataset usually contains approx 20 "headers" that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files in the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

    The original application was written to allow users to load a dataset and map certain columns of each of the files, basically indicating key information on the files to show how they are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset.

    The memory problem

    Given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following:

    1. Perform all the validation whilst the data is in memory.
    2. Relate it using an object model.
    3. Start a DB transaction and write the key columns row by row, noting the Id of each written row (all tables in the database utilise identity columns); the Id of the newly written row is then applied to all related data.
    4. Once all related data has been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands.
    5. Once all the data is written without error (which can take several minutes for large datasets), commit the transaction.

    The problem now comes from the fact that we are receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and am not exactly decided on how to handle the import and transactions.

    Potential improvements

    I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. It would certainly increase the storage required, though, especially in an EAV design. Would you think this is a reasonable thing to try, or should I simply persist with identity fields (natural keys can't be trusted to be unique across all submitters)?

    Another option is the use of staging tables to get data into the database, only performing the transaction to copy data from the staging area to the actual destination tables.

    Questions

    - For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution?
    - The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight?
    - Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above?

    Thanks for your kind attention.
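    As a point of reference for the line-by-line direction the poster anticipates, here is a minimal hedged sketch (hypothetical type and method names, not from the original app) of streaming a tab-delimited file so memory use stays flat regardless of file size:

    ```csharp
    using System.Collections.Generic;
    using System.IO;

    static class DatasetReader
    {
        // Lazily yields one parsed row at a time; only the current line is
        // ever held in memory, so a 100 MB file costs no more than a 5 MB one.
        public static IEnumerable<string[]> ReadRows(string path)
        {
            using (var reader = new StreamReader(path))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    yield return line.Split('\t');
            }
        }
    }
    ```

    Each row can then be validated as it streams past, and the same enumerator can feed SqlBulkCopy through a custom IDataReader implementation, avoiding the intermediate DataTable entirely.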

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry:

    - Sequence ID
    - Username
    - Start Time
    - End Time
    - Request Type

    A request type can be one of the following categories:

    - Calculations, including the default calculation as well as both server and client side calculations
    - Data loads, including data imports as well as data loaded using a load rule
    - Data clears as well as outline resets
    - Locking and sending data from SmartView and the Spreadsheet Add-In.  Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax:

    ```
    TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE
    ```

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example:

    ```
    TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
    TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE
    ```

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually.  Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
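    By way of illustration only, MAXL along the following lines would list and replay logged transactions for a hypothetical Sample.Basic database. This is a hedged sketch from memory; verify the exact grammar against the Essbase Technical Reference for your release.

    ```
    /* List logged transactions since a given time */
    query database Sample.Basic list transactions after '11_20_2012:12:00:00'
        write to file '/tmp/transactions.csv';

    /* Replay everything logged after the last archive */
    alter database Sample.Basic replay transactions after '11_20_2012:12:00:00';

    /* Or replay a specific range of sequence IDs */
    alter database Sample.Basic replay transactions using sequence_id_range 1 to 30;
    ```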

    Read the article

  • Migration Core Data error code 134130

    - by magichero
    I want to do a migration with two Core Data databases. I have read the Apple developer documentation. For the first database, I added some attributes (string, integer and date properties) to the new version of the database and, following all the steps, completed the migration successfully. For the second database, I also added attributes (string, integer, date, transformable and binary-data properties) to the new version and followed all the steps (like the first database), but the system returns an error (134130). Here is the code:

    ```objc
    if (persistentStoreCoordinator_) {
        PMReleaseSafely(persistentStoreCoordinator_);
    }

    // Notify
    NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
    [nc postNotificationName:GCalWillMigrationNotification object:self];

    NSString *sourceStoreType = NSSQLiteStoreType;
    NSString *dataStorePath = [PMUtility dataStorePathForName:GCalDBWarehousePersistentStoreName];
    NSURL *storeURL = [NSURL fileURLWithPath:dataStorePath];
    BOOL storeExists = [[NSFileManager defaultManager] fileExistsAtPath:dataStorePath];

    NSError *error = nil;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
        [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
        nil];
    persistentStoreCoordinator_ = [[NSPersistentStoreCoordinator alloc]
        initWithManagedObjectModel:self.managedObjectModel];
    [persistentStoreCoordinator_ addPersistentStoreWithType:sourceStoreType
                                              configuration:nil
                                                        URL:storeURL
                                                    options:options
                                                      error:&error];
    if (error != nil) {
        abort();
    }
    ```

    The error is not nil, and below is the log:

    ```
    Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn’t be completed.
    (Cocoa error 134130.)" UserInfo=0x856f790 {
      URL=file://localhost/Users/greensun/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/D10712DE-D9FE-411A-8182-C4F58C60EC6D/Library/Application%20Support/Promise%20Mail/GCalDBWarehouse.sqlite,
      metadata={type = immutable dict, count = 7, entries =
        2 : {contents = "NSStoreModelVersionIdentifiers"} = {type = immutable, count = 1, values = (0 : {contents = ""})}
        4 : {contents = "NSPersistenceFrameworkVersion"} = {value = +386, type = kCFNumberSInt64Type}
        6 : {contents = "NSStoreModelVersionHashes"} = {type = immutable dict, count = 2, entries =
          0 : {contents = "SyncEvent"} = {length = 32, capacity = 32, bytes = 0xfdae355f55c13fbd0344415fea26c8bb ... 4c1721aadd4122aa}
          1 : {contents = "ImportEvent"} = {length = 32, capacity = 32, bytes = 0x7676888f0d7eaff4d1f844343028ce02 ... 040af6cbe8c5fd01}}
        7 : {contents = "NSStoreUUID"} = {contents = "51678BAC-CCFB-4D00-AF5C-8FA1BEDA6440"}
        8 : {contents = "NSStoreType"} = {contents = "SQLite"}
        9 : {contents = "_NSAutoVacuumLevel"} = {contents = "2"}
        10 : {contents = "NSStoreModelVersionHashesVersion"} = {value = +3, type = kCFNumberSInt32Type}},
      reason=Can't find model for source store}
    ```

    I have tried a lot of solutions, but none work. I just added more attributes to the two new database versions, and only the first migration succeeded. Thanks for reading and for your help.

    Read the article

  • How do I make a function in SQL Server that accepts a column of data?

    - by brandon k
    I made the following function in SQL Server 2008 earlier this week. It takes two parameters and uses them to select a column of "detail" records, and returns them as a single varchar list of comma-separated values. Now that I get to thinking about it, I would like to take this table- and application-specific function and make it more generic. I am not well versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way?

    Instead of calling:

    ```sql
    SELECT ejc_concatFormDetails(formuid, categoryName)
    ```

    I would like to make it work like:

    ```sql
    SELECT concatColumnValues(SELECT someColumn FROM SomeTable)
    ```

    Here is my function definition:

    ```sql
    FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
    RETURNS VARCHAR(1000) AS
    BEGIN
        DECLARE @returnData VARCHAR(1000)
        DECLARE @currentData VARCHAR(75)
        DECLARE dataCursor CURSOR FAST_FORWARD FOR
            SELECT data FROM DNet.ejc_FormDetails
            WHERE formuid = @formuid AND category = @category

        SET @returnData = ''
        OPEN dataCursor
        FETCH NEXT FROM dataCursor INTO @currentData
        WHILE (@@FETCH_STATUS = 0)
        BEGIN
            SET @returnData = @returnData + ', ' + @currentData
            FETCH NEXT FROM dataCursor INTO @currentData
        END
        CLOSE dataCursor
        DEALLOCATE dataCursor
        RETURN SUBSTRING(@returnData, 3, 1000)
    END
    ```

    As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma-separated varchar. How can I alter this to accept a single parameter that is a result set, and then access that result set with a cursor?
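    For context, a scalar T-SQL function cannot take a result set directly (table-valued parameters in SQL Server 2008 come closest), but the cursor itself can usually be avoided. A hedged sketch of the common FOR XML PATH aggregation pattern, written against the same table as the function above:

    ```sql
    -- Sketch: set-based comma-separated aggregation, no cursor needed.
    -- STUFF(..., 1, 2, '') strips the leading ', '.
    -- Note: XML-special characters (&, <, >) in data are entitized by
    -- this simple form; the TYPE/.value() variant avoids that.
    SELECT STUFF(
        (SELECT ', ' + d.data
         FROM DNet.ejc_FormDetails AS d
         WHERE d.formuid = @formuid AND d.category = @category
         FOR XML PATH('')),
        1, 2, '');
    ```

    Because the subquery is ordinary SQL, the same pattern can be inlined against any table and column, which gets most of the way to the "generic" goal without passing a column as a parameter.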

    Read the article

  • Who writes the words? A rant with graphs.

    - by Roger Hart
    If you read my rant, you'll know that I'm getting a bit of a bee in my bonnet about user interface text. But rather than just yelling about the way the world should be (short version: no UI text would suck), it seemed prudent to actually gather some data. Rachel Potts has made an excellent first foray, by conducting a series of interviews across organizations about how they write user interface text. You can read Rachel's write-up here. She presents the facts as she found them, and doesn't editorialise. The result is insightful, but impartial isn't really my style. So here's a rant with graphs.

    My method, and how it sucked

    I sent out a short survey. Survey design is one of my hobby-horses, and since some smartarse in the comments will mention it if I don't, I'll step up and confess: I did not design this one well. It was potentially ambiguous, implicitly excluded people, and since I only really advertised it on Twitter and a couple of mailing lists the sample will be chock full of biases. Regardless, these were the questions:

    1. What do you do? Select the option that best describes your role.
    2. What kind of software does your organization make? (optional)
    3. In your organization, who writes the text on your software user interfaces? (for example: button names, static text, tooltips, and so on) Tick all that apply.
    4. In your organization who is responsible for user interface text? Who "owns" it?

    The most glaring issue (apart from question 3 being a bit broken) was that I didn't make it clear that I was asking about applications. Desktop, mobile, or web, I wouldn't have minded. In fact, it might have been interesting to categorize and compare. But a few respondents commented on the seeming lack of relevance, since they didn't really make software. There were some other issues too. It wasn't the best survey. So, you know, pinch of salt time with what follows. Despite this, there were 100 or so respondents. This post covers the overview, and you can look at the raw data in this spreadsheet.

    What did people do?

    Boring graph number one: I wasn't expecting that. Given I pimped the survey on Twitter and a couple of Tech Comms discussion lists, I was more banking on an even Content Strategy/Tech Comms split. What the "Others" specified:

    - Three people chipped in with Technical Writer. Author, apparently, doesn't cut it. There's a "nobody reads the instructions" joke in there somewhere, I'm sure.
    - There were a couple of hybrid roles, including Tech Comms and Testing, which sounds gruelling and thankless.
    - There was also an Intranet Manager, a Creative Director, a Consultant, a CTO, an Information Architect, and a Translator.

    That's a pretty healthy slice through the industry.

    Who wrote UI text?

    Boring graph number two: Annoyingly, I made this a "tick all that apply" question, so I can't make crude and inflammatory generalizations about percentages. This is more about who gets involved in user interface wording. So don't panic about the number of developers writing UI text. First off, it just means they're involved. Second, they might be good at it. What? It could happen. Ours are involved: they write a placeholder and flag it to me for changes. Sometimes I don't make any. It's also not surprising that there's so much UX in the mix. Some of that will be people taking care, and crafting an understandable interface. Some of it will be whatever text goes on the wireframe making it into production. I'm going to assume that's what happened at eBay, when their iPhone app purportedly shipped with the placeholder text "Some crappy content goes here". Ahem.

    Listing all 17 "other" responses would make this post lengthy indeed, but you can read them in the raw data spreadsheet. The award for the approach that sounds the most like a good idea yet carries the highest risk of ending badly goes to whoever offered up "External agencies using focus groups". If you're reading this, and that actually works, leave a comment. I'm fascinated.

    Who owned UI text?

    Stop. Bar chart time: Wow. Let's cut to the chase, and by "chase", I mean those inflammatory generalizations I was talking about: in around 60% of cases the person responsible for user interface text probably lacks the relevant expertise. Even in the categories I count as being likely to have relevant skills (Marketing Copywriters, Content Strategists, Technical Authors, and User Experience Designers) there's a case for each role being unsuited, as you'll see in Rachel's blog post. So it's not as simple as my headline. Does that mean that you personally, Mr Developer reading this, write bad button names? Of course not. I know nothing about you. It rather implies that as a category, the majority of people looking after UI text have neither communication nor user experience as their primary skill set, and as such will probably only be good at this by happy accident. I don't have a way of measuring the frequency of those accidents.

    What the "Others" specified:

    - I don't know who owns it. I assume the project manager is responsible.
    - "copywriters" when they wish to annoy me.
    - the client's web maintenance person, often PR or MarComm

    That last one chills me to the bone. Still, at least nobody said "the work experience kid". You can see the rest in the spreadsheet. My overwhelming impression here is of user interface text as an unloved afterthought. There were fewer "nobody" responses than I expected, and a much broader split. But the relative predominance of developers owning and writing UI text suggests to me that organizations don't see it as something worth dedicating attention to. If true, that's bothersome, because the words on the screen, particularly the names of things, are fundamental to the ability to understand and use software. It's also fascinating that Technical Authors and Content Strategists are neck and neck. For such a nascent discipline, Content Strategy appears to have made a mark on software development. Or my sample is skewed. But it feels like a bit of validation for my rant: Content Strategy is eating Tech Comms' lunch. That's not a bad thing. Well, not if the UI text is getting done well. And that's the caveat to this whole post. I couldn't care less who writes UI text, provided they consider the user and don't suck at it. I care that it may be falling by default to people poorly disposed to doing it right. And I care about that because so much user interface text sucks.

    The most interesting question

    Was one I forgot to ask. It's this: Does your organization have technical authors/writers? Like a lot of survey data, that doesn't tell you much on its own. But once we get a bit dimensional, it becomes more interesting. Taken with the other questions, this would have let me find out what I really want to know:

    - What proportion of organizations have Tech Comms professionals but don't use them for UI text?
    - Who writes UI text in their place?
    - Why does this happen?

    It's possible (feasible is another matter) that hundreds of companies have tech authors who don't work on user interfaces because they've empirically discovered that someone else, say the Marketing Copywriter, is better at it. And once we've all finished laughing, I'll point out that I've met plenty of tech authors who just aren't used to thinking about users at the point of need in the way UI text and embedded user assistance require. If you've got what I regard, perhaps unfairly, as the bad kind of tech author, the old-school kind with the thousand-page PDF and the grammar obsession, then you probably are better off getting the UX folk or the copywriters to do your UI text. At the very least, they'll derive terminology from user research.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 2 – Table per Type (TPT)

    - by mortezam
    In the previous blog post you saw that there are three different approaches to representing an inheritance hierarchy, and I explained Table per Hierarchy (TPH) as the default mapping strategy in EF Code First. We argued that the disadvantages of TPH may be too serious for our design, since it results in denormalized schemas that can become a major burden in the long run. In today's blog post we are going to learn about Table per Type (TPT) as another inheritance mapping strategy, and we'll see that TPT doesn't expose us to this problem.

    Table per Type (TPT)

    Table per Type is about representing inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties, including abstract classes, has its own table. The table for a subclass contains columns only for each noninherited property (each property declared by the subclass itself), along with a primary key that is also a foreign key of the base class table. This approach is shown in a figure in the original post (schema diagram omitted here). For example, if an instance of the CreditCard subclass is made persistent, the values of properties declared by the BillingDetail base class are persisted to a new row of the BillingDetails table. Only the values of properties declared by the subclass (i.e. CreditCard) are persisted to a new row of the CreditCards table. The two rows are linked together by their shared primary key value. Later, the subclass instance may be retrieved from the database by joining the subclass table with the base class table.

    TPT Advantages

    The primary advantage of this strategy is that the SQL schema is normalized. In addition, schema evolution is straightforward (modifying the base class or adding a new subclass is just a matter of modifying/adding one table). Integrity constraint definition is also straightforward (note how CardType in the CreditCards table is now a non-nullable column). Another much more important advantage is the ability to handle polymorphic associations (a polymorphic association is an association to a base class, hence to all classes in the hierarchy, with dynamic resolution of the concrete class at runtime). A polymorphic association to a particular subclass may be represented as a foreign key referencing the table of that particular subclass.

    Implementing TPT in EF Code First

    We can create a TPT mapping simply by placing the Table attribute on the subclasses to specify the mapped table name (Table is a new data annotation and has been added to the System.ComponentModel.DataAnnotations namespace in CTP5):

    ```csharp
    public abstract class BillingDetail
    {
        public int BillingDetailId { get; set; }
        public string Owner { get; set; }
        public string Number { get; set; }
    }

    [Table("BankAccounts")]
    public class BankAccount : BillingDetail
    {
        public string BankName { get; set; }
        public string Swift { get; set; }
    }

    [Table("CreditCards")]
    public class CreditCard : BillingDetail
    {
        public int CardType { get; set; }
        public string ExpiryMonth { get; set; }
        public string ExpiryYear { get; set; }
    }

    public class InheritanceMappingContext : DbContext
    {
        public DbSet<BillingDetail> BillingDetails { get; set; }
    }
    ```

    If you prefer the fluent API, then you can create a TPT mapping by using the ToTable() method:

    ```csharp
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<BankAccount>().ToTable("BankAccounts");
        modelBuilder.Entity<CreditCard>().ToTable("CreditCards");
    }
    ```

    Generated SQL for Queries

    Let's take an example of a simple non-polymorphic query that returns a list of all the BankAccounts:

    ```csharp
    var query = from b in context.BillingDetails.OfType<BankAccount>() select b;
    ```

    Executing this query (by invoking the ToList() method) results in SQL statements being sent to the database. (The generated SQL and the result of executing it in SQL Server Management Studio were shown as screenshots in the original post.) Now, let's take an example of a very simple polymorphic query that requests all the BillingDetails, which includes both BankAccount and CreditCard types, and projects some properties out of the base class BillingDetail without querying for anything from any of the subclasses:

    ```csharp
    var query = from b in context.BillingDetails
                select new { b.BillingDetailId, b.Number, b.Owner };

    // var query = from b in context.BillingDetails select b;
    ```

    This LINQ query seems even simpler than the previous one, but the resulting SQL query is not as simple as you might expect (again shown as a screenshot in the original post). As you can see, EF Code First relies on an INNER JOIN to detect the existence (or absence) of rows in the subclass tables CreditCards and BankAccounts, so it can determine the concrete subclass for a particular row of the BillingDetails table. Also, the SQL CASE statements that you see at the beginning of the query are just to ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPT Considerations

    Even though this mapping strategy is deceptively simple, experience shows that performance can be unacceptable for complex class hierarchies, because queries always require a join across many tables. In addition, this mapping strategy is more difficult to implement by hand; even ad-hoc reporting is more complex. This is an important consideration if you plan to use handwritten SQL in your application. (For ad hoc reporting, database views provide a way to offset the complexity of the TPT strategy. A view may be used to transform the table-per-type model into the much simpler table-per-hierarchy model.)

    Summary

    In this post we learned about Table per Type as the second inheritance mapping strategy in our series. So far, the strategies we've discussed require extra consideration with regard to the SQL schema (e.g. in TPT, foreign keys are needed). This situation changes with Table per Concrete Type (TPC), which we will discuss in the next post.

    References
    - ADO.NET team blog
    - Java Persistence with Hibernate book

    Read the article

  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign around how we handle user account administration. I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, with IT only looking in when there's an error reported. The interim phase is going to be semi-manual; that is, a level 2 tech inputs the user's info and supervises the process.

    The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories for any terminated user for 60 days in case the legal team requires access to their files. Our AD is as much a mess as everything else, so there are some users with home directories and some with profiles. Anyone who has a profile dir in AD also has a good deal of their profile redirected to our file servers over DFS. In order to complete the process manually, you find the user in AD, disable them, find their home/profile dir, go there and take ownership, create an archive folder, move all their files over, then delete the old dir. Some users have many, many gigs of nonsense, and this can take quite some time. Even automated, the process would not be a quick one.

    I'm thinking that I need a client-side C# GUI for the quick stuff and some server-side batch script or console app to offload this long-running process. I have a batch script that works decently using takeown and robocopy, but I wonder if a C# console app would do a better job. So, my question at long last is: what do you think is the best way to handle this? I can't imagine this is a unique problem; how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now. If we had been doing this manual crap there, they'd have needed a team of at least 30 full-time workers to keep up. I have decent skills in C#.NET and batch scripting, but am a quick study and I have used most every language once or twice. Thank you for reading this, and I look forward to seeing what imaginative solutions you all can come up with.
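    For reference, the takeown-plus-robocopy core the poster mentions usually boils down to a few lines. A hedged sketch with hypothetical server and path names (run under an account with admin rights on the file server):

    ```bat
    :: Sketch (untested, hypothetical paths): archive a terminated user's
    :: redirected profile folder, then remove the source. %1 = username.
    set "SRC=\\fileserver\profiles\%1"
    set "DST=\\fileserver\archive\%1"

    :: Take ownership recursively so locked-down ACLs don't block the copy
    takeown /F "%SRC%" /R /D Y

    :: /E copies subdirectories; /MOVE deletes the source after copying
    robocopy "%SRC%" "%DST%" /E /MOVE /R:2 /W:5 /LOG+:C:\logs\archive.log
    ```

    Wrapping this in a C# console app mostly buys better error handling and reporting; the heavy lifting is the same either way, so the batch-versus-C# question is largely about how much logging and retry logic the process needs.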

    Read the article

  • Handling multiple column data with Java

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in, it then allows the user to search for a user's details using their email:

    ```
    NAME          ROLE        EMAIL
    ---------------------------------------------------
    Joe Bloggs    Manager     [email protected]
    John Smith    Consultant  [email protected]
    Alan Wright   Tester      [email protected]
    ...
    ```

    The problem I have is that I need to store a large number of details of all the people that have worked at the company. The file containing these details will be written on a yearly basis, simply for reporting purposes, but the program will need to be able to access these details quickly. The way I aim to access these files is to have a program that asks the user for the unique email of a member of staff, and then returns the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file. What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time.
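    For scale, 10K+ records of this shape fit comfortably in memory, so one common approach, regardless of the on-disk format, is to load the file once and index it by email. A minimal sketch with hypothetical names:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Sketch (hypothetical names): index records by email for O(1) lookup.
    public class StaffDirectory {
        public static final class Staff {
            final String name, role, email;
            Staff(String name, String role, String email) {
                this.name = name; this.role = role; this.email = email;
            }
        }

        private final Map<String, Staff> byEmail = new HashMap<>();

        public void add(String name, String role, String email) {
            byEmail.put(email, new Staff(name, role, email));
        }

        public Staff findByEmail(String email) {
            return byEmail.get(email); // null if unknown
        }
    }
    ```

    With this design the file format matters much less; a simple delimited text file parsed line by line at startup would do, since every lookup afterwards hits the map rather than the file.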

    Read the article

  • Flex / Flash builder : no returning data using database

    - by Tristan
    Hello, I'm following some Flex tutorials and everything's working as wanted except for one thing: when I use my function getServerByBrand($brand), no data is returned into my datagrid, and I don't know why, because it uses the same schema as getAllserver(), which works. I don't know whether it's caused by the function itself or by the configuration in Flash Builder:

    ```actionscript
    protected function RechercheGSP_clickHandler(event:MouseEvent):void
    {
        getServerByBrandResult.token = dbClass.getServerByBrand(SearchInput.text);
    }
    ```

    Here's what I've got in Data/Services:

    ```
    getServerByBrand(brand : string) : Object
    ```

    And finally the function:

    ```php
    public function getServerByBrand($brand)
    {
        $stmt = mysqli_prepare($this->connection,
            "SELECT DISTINCT * FROM $this->tablename where GSP_nom=? ");
        $this->throwExceptionOnError();

        mysqli_stmt_execute($stmt);
        $this->throwExceptionOnError();

        $rows = array();

        mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv,
            $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat,
            $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot,
            $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide,
            $row->email);

        while (mysqli_stmt_fetch($stmt))
        {
            $row->timestamp = new DateTime($row->timestamp);
            $rows[] = $row;
            $row = new stdClass();
            mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv,
                $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat,
                $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot,
                $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide,
                $row->email);
        }

        mysqli_stmt_free_result($stmt);
        mysqli_close($this->connection);

        return $rows;
    }
    ```

    I tested the settings with "configure return type" and it tells me: "the operation returned a primitive 'object'". Test settings: Parameters (brand) / Input type (String) / Value (woop). To conclude, no object is returned at all. Do you see the problem? Thanks
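    For what it's worth, one thing stands out in the PHP above: the statement declares a ? placeholder, but $brand is never bound to it before mysqli_stmt_execute, so the query has no parameter value to match against. A hedged sketch of the apparently missing call:

    ```php
    // Sketch: bind the parameter before executing; this call appears to be
    // missing from the function above ('s' = string type for $brand).
    mysqli_stmt_bind_param($stmt, 's', $brand);
    mysqli_stmt_execute($stmt);
    ```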

    Read the article

  • How to determine the data type of a CvMat

    - by Chris
    When using the CvMat type, the type of data is crucial to keeping your program running. For example, depending on whether your data is type float or unsigned char, you would choose one of these two commands: cvmGet(mat, row, col); cvGetReal2D(mat, row, col); Is there a universal approach to this? If a matrix of the wrong data type is passed to these calls, they crash at runtime. This is becoming an issue, since a function I have defined is getting passed several different types of matrices. How do you determine the data type of a matrix so you can always access its data? I tried using the "type()" function like this: CvMat* tmp_ptr = cvCreateMat(t_height,t_width,CV_8U); std::cout << "type = " << tmp_ptr->type() << std::endl; This does not compile, saying "term does not evaluate to a function taking 0 arguments". If I remove the brackets after the word type, I get a type of 1111638032. EDIT: a minimal application that reproduces this... int main( int argc, char** argv ) { CvMat *tmp2 = cvCreateMat(10,10, CV_32FC1); std::cout << "tmp2 type = " << tmp2->type << " and CV_32FC1 = " << CV_32FC1 << " and " << (tmp2->type == CV_32FC1) << std::endl; } Output: tmp2 type = 1111638021 and CV_32FC1 = 5 and 0
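    For context, a small sketch (legacy OpenCV C API): the raw type field also packs the CvMat "magic" signature and layout flags such as the continuous-storage bit, which is why it prints 1111638021 rather than 5. Masking it with CV_MAT_TYPE, or calling cvGetElemType, recovers the bare constant you can compare against:

      #include <stdio.h>
      #include <opencv/cv.h>   /* OpenCV 1.x-era C API */

      int main(void) {
          CvMat *m = cvCreateMat(10, 10, CV_32FC1);
          int t = CV_MAT_TYPE(m->type);    /* masks off the flag bits -> CV_32FC1 */
          printf("masked type = %d, cvGetElemType = %d, CV_32FC1 = %d\n",
                 t, cvGetElemType(m), CV_32FC1);
          if (t == CV_32FC1)
              printf("float matrix: cvmGet is the right accessor\n");
          else if (t == CV_8UC1)
              printf("uchar matrix: use cvGetReal2D\n");
          cvReleaseMat(&m);
          return 0;
      }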

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up. Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots. Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial. Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead. I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.
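    Whichever option you choose, the slot arithmetic stays the same. A sketch in plain C (so it drops straight into Objective-C; the function name is made up):

      /* 7 days x 24 hours x 12 five-minute slots per hour = 2016 slots per week */
      int slotIndex(int weekday, int hour, int minute) {
          return weekday * 288 + hour * 12 + minute / 5;   /* 288 = 24 * 12 */
      }

    Lookup then reduces to "which rule covers slotIndex(now)?", which argues for something like Option #3: store only the slots that have an active rule and fall back to a default number for the rest, rather than persisting 2,016 entries per contact.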

    Read the article

  • Type errors when using the same name

    - by lykimq
    I have 3 files: 1) cpf0.ml type string = char list type url = string type var = string type name = string type symbol = | Symbol_name of name 2) problem.ml: type symbol = | Ident of string 3) test.ml open Problem;; open Cpf0;; let symbol b = function | Symbol_name n -> Ident n When I compile test.ml (ocamlc -c test.ml), I receive an error: This expression has type Cpf0.name = char list but an expression was expected of type string Could you please help me to correct it? Thank you very much EDIT: Thank you for your answer. I want to explain more about these 3 files, because I am working with extraction from Coq to OCaml types: cpf0.ml is generated from cpf.v: Require Import String. Definition string := string. Definition name := string. Inductive symbol := | Symbol_name : name -> symbol. The code in extraction.v: Set Extraction Optimize. Extraction Language Ocaml. Require ExtrOcamlBasic ExtrOcamlString. Extraction Blacklist cpf list. (ExtrOcamlString is what extracts Coq strings to char list.) I opened Cpf0 (open Cpf0;;) in problem.ml, and I got a new problem, because problem.ml has another definition for type string: This expression has type Cpf0.string = char list but an expression was expected of type Util.StrSet.elt = string Here is the definition in util.ml that builds on type string: module Str = struct type t = string end;; module StrOrd = Ord.Make (Str);; module StrSet = Set.Make (StrOrd);; module StrMap = Map.Make (StrOrd);; let set_add_chk x s = if StrSet.mem x s then failwith (x ^ " already declared") else StrSet.add x s;; I was trying to change t = string to t = char list, but if I do that I have to change a lot of functions that depend on it (for example, set_add_chk above). Could you please suggest a good way to handle this case?
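    A sketch of one way out: since Cpf0 rebinds string to char list, convert explicitly at the module boundary instead of rewriting Util.Str (type annotations are left off so OCaml infers the built-in string from String.concat):

      let string_of_chars cs =
        String.concat "" (List.map (String.make 1) cs)

      let symbol b = function
        | Symbol_name n -> Ident (string_of_chars n)

    The same helper works for the Util.StrSet case: convert the char list to a real string before calling set_add_chk, and util.ml stays untouched.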

    Read the article

  • Generating dynamic data using JavaScript

    - by methuselah
    Given that I have an array of alphabetical characters: var qwerty = [['q', 'w', 'e', 'r', 't', 'y', 'u', 'i', 'o', 'p'], ['a', 's', 'd', 'f', 'g', 'h', 'j', 'k', 'l'], ['z', 'x', 'c', 'v', 'b', 'n', 'm']]; How would I present them in three rows, like on a conventional keyboard, using JS, without resorting to something like this: <input type='button' value='Q' id='bt1'/> <input type='button' value='W' id='bt2'/> <input type='button' value='E' id='bt3'/> <input type='button' value='R' id='bt4'/> <input type='button' value='T' id='bt5'/> <input type='button' value='Y' id='bt6'/> <input type='button' value='U' id='bt7'/> <input type='button' value='I' id='bt8'/> <input type='button' value='O' id='bt9'/> <input type='button' value='P' id='bt10'/> <br> <input type='button' value='A' id='bt11'/> <input type='button' value='S' id='bt12'/> <input type='button' value='D' id='bt13'/> <input type='button' value='F' id='bt14'/> <input type='button' value='G' id='bt15'/> <input type='button' value='H' id='bt16'/> <input type='button' value='J' id='bt17'/> <input type='button' value='K' id='bt18'/> <input type='button' value='L' id='bt19'/> <br /> ... Many thanks in advance!
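    A sketch of the loop-based version (it assumes a container element with id="keyboard" already exists in the page):

      var keyboard = document.getElementById('keyboard');
      var id = 1;
      for (var row = 0; row < qwerty.length; row++) {
          for (var key = 0; key < qwerty[row].length; key++) {
              var btn = document.createElement('input');
              btn.type = 'button';
              btn.value = qwerty[row][key].toUpperCase();
              btn.id = 'bt' + id++;   // bt1, bt2, ... with no duplicates
              keyboard.appendChild(btn);
          }
          keyboard.appendChild(document.createElement('br'));   // new row per sub-array
      }

    Generating the buttons this way also makes it trivial to attach one click handler per key, or to restyle a whole row, without touching the markup.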

    Read the article

  • Change the User Interface Language in Ubuntu

    - by Matthew Guay
    Would you like to use your Ubuntu computer in another language? Here's how you can easily change your interface language in Ubuntu. Ubuntu's default install only includes a couple of languages, but it makes it easy to find and add a new interface language to your computer. To get started, open the System menu, select Administration, and then click Language Support. Ubuntu may ask if you want to update or add components to your current default language when you first open the dialog. Click Install to go ahead and install the additional components, or click Remind Me Later to wait, as these will be installed automatically when you add a new language. Now we're ready to find and add an interface language to Ubuntu. Click Install / Remove Languages to add the language you want. Find the language you want in the list, and click the check box to install it. Ubuntu will show you all the components it will install for the language; this often includes spellchecking files for OpenOffice as well. Once you've made your selection, click Apply Changes to install your new language. Make sure you're connected to the internet, as Ubuntu will have to download the additional components you've selected. Enter your system password when prompted, and then Ubuntu will download the needed language files and install them. Back in the main Language & Text dialog, we're now ready to set our new language as the default. Find your new language in the list, and then click and drag it to the top of the list. Notice that Thai is the first language listed, and English is the second. This will make Thai the default language for menus and windows in this account. The tooltip reminds us that this setting does not affect system settings like currency or date formats. To change these, select the Text tab and pick your new language from the drop-down menu. You can preview the changes in the bottom Example box. The changes we just made will only affect this user account; the login screen and startup will not be affected. If you wish to change the language of the startup and login screens as well, click Apply System-Wide in both dialogs. Other user accounts will still retain their original language settings; if you wish to change them, you must do so from those accounts. Once you have your new language settings all set, you'll need to log out of your account and log back in to see your new interface language. When you re-login, Ubuntu may ask if you want to update your user folders' names to your new language. For example, here Ubuntu is asking if we want to change our folders to their Thai equivalents. If you wish to do so, click Update or its equivalent in your language. Now your interface will be almost completely translated into your new language. As you can see here, applications with generic names are translated to Thai, but ones with specific names like Shutter keep their original name. Even the help dialogs are translated, which makes it easy for users around the world to get started with Ubuntu. Once again, you may notice some things that are still in English, but almost everything is translated. Adding a new interface language doesn't add the new language to your keyboard, so you'll still need to set that up. Check out our article on adding languages to your keyboard to get that set up.
If you wish to revert to your original language or switch to another new language, simply repeat the above steps, this time dragging your original or new language to the top instead of the one you chose previously. Conclusion Ubuntu has a large number of supported interface languages to make it user-friendly to people around the globe. And since you can set the language for each user account, it's easy for multi-lingual individuals to share the same computer. Or, if you're using Windows, check out our article on how you can Change the User Interface Language in Vista or Windows 7, too!

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Book On-Line:
    LCK_M_BU - Occurs when a task is waiting to acquire a Bulk Update (BU) lock.
    LCK_M_IS - Occurs when a task is waiting to acquire an Intent Shared (IS) lock.
    LCK_M_IU - Occurs when a task is waiting to acquire an Intent Update (IU) lock.
    LCK_M_IX - Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.
    LCK_M_S - Occurs when a task is waiting to acquire a Shared lock.
    LCK_M_SCH_M - Occurs when a task is waiting to acquire a Schema Modify lock.
    LCK_M_SCH_S - Occurs when a task is waiting to acquire a Schema Share lock.
    LCK_M_SIU - Occurs when a task is waiting to acquire a Shared With Intent Update lock.
    LCK_M_SIX - Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock.
    LCK_M_U - Occurs when a task is waiting to acquire an Update lock.
    LCK_M_UIX - Occurs when a task is waiting to acquire an Update With Intent Exclusive lock.
    LCK_M_X - Occurs when a task is waiting to acquire an Exclusive lock.
    LCK_M_XXX explanation: I think the explanation of this wait type is the simplest: it occurs whenever a task is waiting to acquire a lock on any resource. The common reason is that the resource is already locked and some other operation is going on within it. This wait also indicates that the resource is not available, or is occupied at the moment, for some reason. There is a good chance that waiting queries start to time out if this wait type is very high, and client applications may see degraded performance as well. You can use various methods to find blocking queries:
    EXEC sp_who2
    SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
    DMV – sys.dm_tran_locks
    DMV – sys.dm_os_waiting_tasks
    Reducing LCK_M_XXX waits:
    Check for long explicit transactions; if transactions are very long, this wait type can start building up because of other waiting transactions. Keep transactions small.
    Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait may be natural. The default isolation of SQL Server is Read Committed. One of my clients changed their isolation to Read Uncommitted; I strongly discourage this, because it will probably lead to lots of dirty reads.
    Identify blocking queries using the methods described above, and then optimize them.
    Partitioning can be one option to consider, because it allows transactions to execute concurrently on different partitions.
    If there are runaway queries, use a timeout. (Please discuss this solution with your database architect first, as timeouts can work against you.)
Check that there is no memory- or IO-related issue, using the following counters:
    Memory-related Perfmon counters:
    SQLServer: Memory Manager\Memory Grants Pending (a consistent value higher than 0-2 is a concern)
    SQLServer: Memory Manager\Memory Grants Outstanding (consistently high values; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% on a usually smooth-running system)
    SQLServer: Buffer Manager\Page Life Expectancy (a consistent value lower than 300 seconds is a concern)
    Memory: Available MBytes (information only)
    Memory: Page Faults/sec (benchmark only)
    Memory: Pages/sec (benchmark only)
    Disk-related Perfmon counters:
    Average Disk sec/Read (a consistent value higher than 4-8 milliseconds is not good)
    Average Disk sec/Write (a consistent value higher than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (a consistent value higher than the benchmark is not good)
    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
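    As a starting point, here is a sketch of a sys.dm_os_waiting_tasks query that lists the sessions currently stuck on lock waits, along with the session blocking each of them:

      -- Sessions waiting on LCK_M_% wait types, longest waits first
      SELECT wt.session_id,
             wt.wait_type,
             wt.wait_duration_ms,
             wt.blocking_session_id
      FROM sys.dm_os_waiting_tasks AS wt
      WHERE wt.wait_type LIKE 'LCK_M_%'
      ORDER BY wt.wait_duration_ms DESC;

    Feeding the blocking_session_id values into sp_who2 (or sys.dm_exec_requests) then shows what the blockers are actually running.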

    Read the article

  • User Lockout & WLST

    - by Bala Kothandaraman
    WebLogic Server provides an option to lock out users, to protect accounts against password-guessing attacks. It is implemented with a realm-wide Lockout Manager. This feature can be used with a custom authentication provider as well; and if you implement your own authentication provider and wish to implement your own lockout manager, that is possible too. If your domain is configured to use the user lockout manager, the following WLST script will help you to:
    - check whether a user is locked out
    - find out the number of locked-out users in the realm
      #Define constants
      url='t3://localhost:7001'
      username='weblogic'
      password='weblogic'
      checkuser='test-deployer'
      #Connect
      connect(username,password,url)
      #Get Lockout Manager Runtime
      serverRuntime()
      dr = cmo.getServerSecurityRuntime().getDefaultRealmRuntime()
      ulmr = dr.getUserLockoutManagerRuntime()
      print '-------------------------------------------'
      #Check whether a user is locked
      if (ulmr.isLockedOut(checkuser) == 0):
          islocked = 'NOT locked'
      else:
          islocked = 'locked'
      print 'User ' + checkuser + ' is ' + islocked
      #Print number of locked users
      print 'No. of locked users - ', Integer(ulmr.getUserLockoutTotalCount())
      print '-------------------------------------------'
      print ''
      #Disconnect & Exit
      disconnect()
      exit()

    Read the article

  • What are the data transfer speeds within the disk and to other devices?

    - by Kumar
    I use Debian 6 on an HP EliteBook 6930 with 2 GB of RAM. I was copying two AVI files, 1.5 GB in total, and noticed that the copying ran at a rate of 4 MB/sec. When I copy the same AVIs to my Western Digital Passport 25G USB plug-in drive, the data transfer speed is 11+ MB/sec. This speed differs depending on which USB port I plug the drive into. Interestingly, at work I also downloaded a 16 MB IE8 exe on my virtual XP, running inside Oracle (formerly Sun) VirtualBox, and it was downloaded AND saved within a second. We have a high-speed network at work. :-) Why and how is this difference in data speeds possible?

    Read the article

  • Google I/O 2010 - Creating positive user experiences

    Google I/O 2010 - Beyond design: Creating positive user experiences (Tech Talk by John Zeratsky and Matt Shobe). Good user experience isn't just about good design. Learn how to create a positive user experience by being fast, open, engaged, surprising, polite, and, well... being yourself. Chock full of examples from the web and beyond, this talk is a practical introduction for developers who are passionate about user experience but may not have a background in design. For all I/O 2010 sessions, please go to code.google.com (52:11, from GoogleDevelopers).

    Read the article
