Search Results

Search found 75615 results on 3025 pages for 'www data'.


  • Sorting data in the SSIS Pipeline (Video)

    In this post I want to show a couple of ways to order the data that comes into the pipeline. A number of people have asked me about this, primarily because there are a number of ways to do it, but also because some components in the pipeline take sorted inputs. One of the methods I show is visually easy to understand; the other is less visual but potentially more performant.
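
    (A hedged illustration of the second approach, not taken from the video; the Orders table is hypothetical. The idea is to sort in the source query and then mark the source output as sorted so downstream components such as Merge Join can rely on it.)

        -- Source query for an OLE DB Source component: let the database engine do the sort
        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders
        ORDER BY CustomerID;
        -- Then, in the source's Advanced Editor, set IsSorted = True on the output
        -- and SortKeyPosition = 1 on the CustomerID output column.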

    Read the article

  • Subdomain redirect to WWW

    - by manix
    I have the domain example.com and the subdomain test.example.com running on an Apache server. For some reason, when I try to visit test.example.com it is redirected to www.test.example.com, and consequently a "Server not found" error is displayed in the browser. Both .htaccess files (in the root and in the subdomain folder) are empty. Additional facts: I have another subdomain, xyz.example.com, pointed to the public_html/xyz directory with some content inside (an index.html with a "hello world" message), and it works fine if I use xyz.example.com instead of www.xyz.example.com. So, can you point me in the right direction? I have a VPS and I am able to change any file if required. Below you can find my virtual host configuration.

        <VirtualHost xx.xxx.xxx:80>
            ServerName test.example.com
            ServerAlias www.test.example.com
            DocumentRoot /home/example/public_html/test
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/test.example.com combined
            CustomLog /usr/local/apache/domlogs/test.example.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ScriptAlias /cgi-bin/ /home/example/public_html/test/cgi-bin/
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/std/2/example/test.example.com/*.conf"
        </VirtualHost>

    Read the article

  • Application Crash cleared the content of the Folder

    - by Ameya
    Recently, while working with LinuxDC++ over the network, the application crashed while downloading files. Now my Downloads folder, which had at least 60-80 GB of data, is completely empty, but the system is not reporting the correct free space. Is there a way to restore the contents of just that folder? The solutions I have found are for whole partitions; I only want to recover the contents of one folder.

    Read the article

  • How much information can you mine out of a name?

    - by Finglas Fjorn
    While not directly related to programming, I figured that the programmers on here would be just as curious as I was about this question. Feel free to close the question if it does not meet the guidelines. A name: first, possibly a middle, and surname. I'm curious about how much information you can mine out of a name, using publicly available datasets. I know that you can get the following, with anywhere between low and high probability (depending on the input), using US census data: 1) Gender. 2) Race. Facebook, for instance, used exactly that to find out, with a decent level of accuracy, the racial distribution of users of their site (https://www.facebook.com/note.php?note_id=205925658858). What else can be mined? I'm not looking for anything specific; this is a very open-ended question to assuage my curiosity. My examples are US-specific, so we'll assume that the name is the name of someone located in the US; but if someone knows of publicly available datasets for other countries, I'm more than open to them too. I hope this is an interesting question!

    Read the article

  • What's beyond c,c++ and data structure?

    - by sagacious
    I have learnt the C and C++ programming languages, and I have learnt data structures too. Now I'm confused about what to do next. My aim is to be a good programmer. I want to go deeper into the field of programming and make practical applications of what I have learnt. So the question takes the form: what should I do next? Or is there any site where I can see the advantages of each language along with its features? Sorry if there are any language errors, and thanks in advance.

    Read the article

  • Five stars of open data - example and review

    - by Joe
    (There may be a more suited SE site for this question, so feel free to shift it.) I have some data I'd like to make open to the public - it's a synthesis of related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns. I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I'm confused towards the end and could do with an explanation from people who know the answers. To achieve the star levels I need to: "make your stuff available on the Web (whatever format) under an open license" - trivial; all I have to do is put up the notes on the page that give the provenance of the data. "make it available as structured data (e.g., Excel instead of image scan of a table)" - done. "use non-proprietary formats (e.g., CSV instead of Excel)" - done. "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy; does this mean there should be a URI for every line in the table? "link your data to other data to provide context" - this isn't massively clear to me; does this mean to give the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at? Any and all information and answers welcome. EDIT - or if anyone wants to recommend a place, SE or otherwise, to ask the question - that would be cool...

    Read the article

  • ExpressionEngine Segment Variables Lost on Site Index Page

    - by Jesse Bunch
    Hey Everyone, I've been messing with this for days now and can't seem to figure it out. I am trying to pass a 2nd segment variable to my client's index page. The URL I'm trying is: http://www.compupay.com/site/CSCPA/. The problem is, rather than showing the site's index page with the segment variable of "CSCPA" still in the URL, it shows the index page with no segment variables. Initially, I thought it was a .htaccess problem, but I couldn't find anything in it that seemed out of whack. Any ideas? I am posting the .htaccess file so another pair of eyes can see it. Thanks for the help!

        # -- LG .htaccess Generator Start --
        # .htaccess generated by LG .htaccess Generator v1.0.0
        # http://leevigraham.com/cms-customisation/expressionengine/addon/lg-htaccess-generator/

        # secure .htaccess file
        <Files .htaccess>
        order allow,deny
        deny from all
        </Files>

        # Dont list files in index pages
        IndexIgnore *

        #URL Segment Support
        AcceptPathInfo On
        Options +FollowSymLinks

        #Redirect old incoming links
        Redirect 301 /contactus.cfm http://www.compupay.com/about_compupay/contact_us/
        Redirect 301 /Internet_Payroll.cfm http://www.compupay.com/payroll_solutions/c/online_payroll/
        Redirect 301 /Internet_Payroll_XpressPayroll.cfm http://www.compupay.com/payroll_solutions/xpresspayroll/
        Redirect 301 /about_compupay.cfm http://www.compupay.com/about_compupay/news/
        Redirect 301 /after_payroll.cfm http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /news101507.cfm http://www.compupay.com/about_compupay/news/
        Redirect 301 /quote.cfm http://www.compupay.com/payroll_solutions/get_a_free_quote/
        Redirect 301 /solution_finder_sm.cfm http://www.compupay.com/
        Redirect 301 /state_payroll/mississippi_payroll.cfm http://www.compupay.com/resource_center/state_resources/
        Redirect 301 /state_payroll/washington_payroll.cfm http://www.compupay.com/resource_center/state_resources/

        #Redirect for old top linked to pages
        Redirect 301 /Payroll_Services.cfm http://www.compupay.com/payroll_solutions/
        Redirect 301 /About_CompuPay.cfm http://www.compupay.com/about_compupay/
        Redirect 301 /Partnerships.cfm http://www.compupay.com/business_partner_solutions/
        Redirect 301 /about_compupay.cfm?subpage=393 http://www.compupay.com/about_compupay/
        Redirect 301 /quote.cfm http://www.compupay.com/payroll_solutions/get_a_free_quote/
        Redirect 301 /After_Payroll.cfm http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /Accountant_Services.cfm http://www.compupay.com/accountant_solutions/
        Redirect 301 /careers/careers_payroll.cfm http://www.compupay.com/about_compupay/careers/
        Redirect 301 /Industry_Resources.cfm http://www.compupay.com/resource_center/
        Redirect 301 /Client_Resources.cfm http://www.compupay.com/resource_center/client_login/
        Redirect 301 /client_resources.cfm?subpage=375 http://www.compupay.com/resource_center/client_login/
        Redirect 301 /solution_finder_sm.cfm http://www.compupay.com/payroll_solutions/
        Redirect 301 /Internet_Payroll_PowerPayroll.cfm http://www.compupay.com/payroll_solutions/powerpayroll/
        Redirect 301 /Payroll_Outsourcing.cfm http://www.compupay.com/payroll_solutions/why_outsource/
        Redirect 301 /Phone_Payroll_Fax_Payroll.cfm http://www.compupay.com/payroll_solutions/phone_fax_payroll/
        Redirect 301 /contactus.cfm http://www.compupay.com/about_compupay/contact_us/
        Redirect 301 /state_payroll/iowa_payroll.cfm http://www.compupay.com/resource_center/state_resources/
        Redirect 301 /Construction_Payroll.cfm http://www.compupay.com/payroll_solutions/specialty_payroll/
        Redirect 301 /PC_Payroll.cfm http://www.compupay.com/payroll_solutions/c/pc_payroll/
        Redirect 301 /state_payroll/washington_payroll.cfm http://www.compupay.com/resource_center/state_resources/
        Redirect 301 /Internet_Payroll_XpressPayroll.cfm http://www.compupay.com/payroll_solutions/xpresspayroll/
        Redirect 301 /accountant_services.cfm?subpage=404 http://www.compupay.com/accountant_solutions/
        Redirect 301 /after_payroll.cfm http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=361 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=362 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=363 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=364 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=365 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=366 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=367 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=368 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=369 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /after_payroll.cfm?subpage=416 http://www.compupay.com/after_payroll_solutions/
        Redirect 301 /payload_payroll.cfm http://www.compupay.com/payroll_solutions/payload/
        Redirect 301 /payroll_services.cfm?subpage=358 http://www.compupay.com/payroll_solutions/
        Redirect 301 /payroll_services.cfm?subpage=399 http://www.compupay.com/payroll_solutions/
        Redirect 301 /payroll_services.cfm?subpage=409 http://www.compupay.com/payroll_solutions/
        Redirect 301 /payroll_services.cfm?subpage=413 http://www.compupay.com/payroll_solutions/
        Redirect 301 /payroll_services.cfm?subpage=418 http://www.compupay.com/payroll_solutions/
        Redirect 301 /state_payroll/mississippi_payroll.cfm http://www.compupay.com/resource_center/state_resources/

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /

        # Remove the www
        # RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        # RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # Force www
        RewriteCond %{HTTP_HOST} !^www.compupay.com$ [NC]
        RewriteRule ^(.*)$ http://www.compupay.com/$1 [R=301,L]

        # Add a trailing slash to paths without an extension
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]{1,5}|/)$
        RewriteRule ^(.*)$ $1/ [L,R=301]

        #Legacy Partner Link Redirect
        RewriteCond %{QUERY_STRING} partnerCode=(.*) [NC]
        RewriteRule compupay_payroll.cfm site/%1? [R=301,L]

        # Catch any remaining requests for .cfm files
        RewriteCond %{REQUEST_URI} \.cfm
        RewriteRule ^.*$ http://www.compupay.com/ [R=301,L]

        #Expression Engine
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        AcceptPathInfo On
        </IfModule>

        # Remove IE image toolbar
        <FilesMatch "\.(html|htm|php)$">
        Header set imagetoolbar "no"
        </FilesMatch>

        # enable gzip compression
        <FilesMatch "\.(js|css|php)$">
        SetOutputFilter DEFLATE
        </FilesMatch>

        #Deal with ETag
        <IfModule mod_headers.c>
        <FilesMatch "\.(ico|flv|jpg|jpeg|png|gif)$">
        Header unset Last-Modified
        </FilesMatch>
        <FilesMatch "\.(ico|flv|jpg|jpeg|png|gif|js|css|swf)$">
        Header unset ETag
        FileETag None
        Header set Cache-Control "public"
        </FilesMatch>
        </IfModule>

        <IfModule mod_expires.c>
        <FilesMatch "\.(ico|flv|jpg|jpeg|png|gif|css|js)$">
        ExpiresActive on
        ExpiresDefault "access plus 1 year"
        </FilesMatch>
        </IfModule>

        #Force Download PDFs
        <FilesMatch "\.(?i:pdf)$">
        ForceType application/octet-stream
        Header set Content-Disposition attachment
        </FilesMatch>

        #Increase Upload Size
        php_value upload_max_filesize 5M
        php_value post_max_size 5M

        # -- LG .htaccess Generator End --

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as

        create table documents(id NUMBER, text VARCHAR2(4000));

    However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document.

        create table extracted_tokens (id NUMBER, token VARCHAR2(4000));

    The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text.

        DECLARE
          cursor c2 is
            select id, text
            from documents;
        BEGIN
          for r_c2 in c2 loop
            insert into extracted_tokens
              select r_c2.id id, main_term token
              from my_thesaurus_rules
              where matches(query_string, r_c2.text) > 0;
          end loop;
        END;

    Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document.

        create table extracted_tokens_tfidf as
          with num_docs as (select count(distinct id) doc_cnt
                            from extracted_tokens),
               tf       as (select a.id, a.token,
                                   a.token_cnt/b.num_tokens token_freq
                            from
                              (select id, token, count(*) token_cnt
                               from extracted_tokens
                               group by id, token) a,
                              (select id, count(*) num_tokens
                               from extracted_tokens
                               group by id) b
                            where a.id = b.id),
               doc_freq as (select token, count(*) overall_token_cnt
                            from extracted_tokens
                            group by token)
          select tf.id, tf.token,
                 token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf
          from num_docs,
               tf,
               doc_freq df
          where df.token = tf.token;

    From the WITH clause: the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by counting the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of times each token appears overall in the corpus. In the SELECT clause, we compute the tf_idf. Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1.

        CREATE TABLE extracted_tokens_tfidf_nt
          NESTED TABLE extracted_tokens_tfidf_1
            STORE AS extracted_tokens_tfidf_tab AS
          select id,
                 cast(collect(DM_NESTED_NUMERICAL(token, tf_idf))
                      as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1
          from extracted_tokens_tfidf
          group by id;

    To build the clustering model, we create a settings table and then insert the various settings. Most notable are the number of clusters (20), the cosine distance measure (which is better for text), turning off automatic data preparation (since the values are ready for mining), the number of iterations (20, to get a better model), and the split criterion of size (for clusters that are roughly balanced in the number of cases assigned).

        CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30));

        BEGIN
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.clus_num_clusters, 20);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_iterations, 20);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);
          COMMIT;
        END;

    With this in place, we can now build the clustering model.

        BEGIN
          DBMS_DATA_MINING.CREATE_MODEL(
            model_name          => 'TEXT_CLUSTERING_MODEL',
            mining_function     => dbms_data_mining.clustering,
            data_table_name     => 'extracted_tokens_tfidf_nt',
            case_id_column_name => 'id',
            settings_table_name => 'km_settings');
        END;

    To generate cluster names from this model, check out my earlier post on that topic.
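
    (Not part of the original post: once the model is built, one hedged, minimal way to retrieve each document's cluster assignment is Oracle's SQL scoring functions, assuming the model and case table names above.)

        select id,
               cluster_id(TEXT_CLUSTERING_MODEL using *)          cls,   -- winning cluster
               cluster_probability(TEXT_CLUSTERING_MODEL using *) prob   -- assignment confidence
        from extracted_tokens_tfidf_nt;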

    Read the article

  • How can I get WWW-Mechanize to login to Wells Fargo's website?

    - by J Miller
    I am trying to use Perl's WWW::Mechanize to log in to my bank and pull transaction information. After logging in to my bank (Wells Fargo) through a browser, it briefly displays a temporary web page saying something along the lines of "please wait while we verify your identity". After a few seconds it proceeds to the bank's web page where I can get my bank data. The only difference is that the final URL contains several more "GET" parameters appended to it, whereas the URL of the temporary page only had a sessionID parameter. I was able to successfully get WWW::Mechanize to log in from the login page, but it gets stuck on the temporary page. There is a <meta http-equiv="Refresh"... tag in the header, so I tried $mech->follow_meta_redirect but it didn't get me past that temporary page either. Any help getting past this would be appreciated. Thanks in advance. Here is the barebones code that gets me stuck at the temporary page:

        #!/usr/bin/perl -w
        use strict;
        use WWW::Mechanize;

        my $mech = WWW::Mechanize->new();
        $mech->agent_alias( 'Linux Mozilla' );

        $mech->get( "https://www.wellsfargo.com" );
        $mech->submit_form(
            form_number => 2,
            fields      => { userid => "$userid", password => "$password" },  # defined elsewhere
            button      => "btnSignon"
        );

    Read the article

  • why including www in a jquery load causes it to fail?

    - by zenna
    Could someone enlighten me as to why including www in an Ajax request causes it to fail? I.e., this works: $('#mydiv').load('http://mydomain.com/getitems'); But this doesn't (returns nothing): $('#mydiv').load('http://www.mydomain.com/getitems'); Note that www.mydomain.com/getitems is a valid address, in the sense that if I point my web browser to it I am able to load the page.

    Read the article

  • What is the preferred method of accessing WWW::Mechanize responses?

    - by sid_com
    Hello! Are both of these versions OK, or is one of them preferable?

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use feature 'say';  # required for say(), missing from the original snippet
        use WWW::Mechanize;

        my $mech = WWW::Mechanize->new();
        my $content;

        # 1
        $mech->get( 'http://www.kernel.org' );
        $content = $mech->content;
        say $content;

        # 2
        my $res = $mech->get( 'http://www.kernel.org' );
        $content = $res->content;
        say $content;

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone. Plus, a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integration (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes; Identity Management for managing users; WS Manager for managing web services; Business Intelligence Enterprise Edition for analytics; and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.
    For example, a basic MDM solution will need: a set of data access methods that support master data as a service as well as direct real-time access and batch loads and extracts; a data migration service for initial loads and periodic updates; a metadata management capability for items such as business entity matrixed relationships and hierarchies; a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements; a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship; a set of data quality functions that can manage structured and unstructured data; a data quality interface to assist with preventing new errors from entering the system even when data entry happens outside the MDM application itself; a continuing data cleansing function to keep the data up to date; an internal triggering mechanism to create and deploy change information to all connected systems; a comprehensive role-based data security system to control and monitor data access and update rights and maintain change history; a flexible business rules engine for managing master data processes such as privacy and data movement; a user interface to support casual users and data stewards; a business intelligence structure to support profiling, compliance, and business performance indicators; and an analytical foundation for directly analyzing master data. Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack, in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn, and so little time. As we have little time, we need to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know what is around SQL. I have recently started to learn about NewSQL. If you wonder what NewSQL is, I encourage all of you to read my blog post about it over here: Big Data - Buzz Words: What is NewSQL - Day 10 of 21. NewSQL databases are quickly becoming popular - providing the scale of NoSQL with SQL features and transactions. As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was previously only available as an appliance, but with their software release on Oct 31, everyone can use it. It is now available as forever-free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted about ClustrixDB. ClustrixDB allows users to: scale by simply adding nodes to the cluster with a single command; run billions of transactions a day; run fast real-time analytics; achieve high availability with recovery from node failure; let the cluster manage itself; and easily migrate from MySQL, as it is nearly plug-and-play compatible and works with MySQL drivers, tools and replication. While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Clustrix sounds like a very promising solution, so I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them. Twoo.com - Europe's largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster. EngageBDR - Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform; their reports went from 4 hours to 15 seconds. NoMoreRack - Top 2 fastest-growing e-commerce company in the US, used ClustrixDB for high availability and fast growth on the Amazon cloud. MakeMyTrip - India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore. Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the conversation turns to NewSQL, I know what I am talking about.
    Read more about why ClustrixDB might be the right database for you. Download ClustrixDB today and install it on your machine, so that when we discuss its technical aspects in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts, but I found the following podcast on the topic (see the end of this post) and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture: business best practices and industry knowledge; pre-built advanced analytics intended to predict future events before they happen (like the churn model shown below); and Oracle technology best practices to ensure optimal performance of the model. All of this typically comes with a reduced time to implementation or, as the marketing slogan goes, reduced time to value. Here are the links: Podcast on OCDM; OTN pages for OCDM and ORDM

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can't insert into specific columns: if, for example, there are five columns in a table, you should have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table as below (the table definition is shown as an image in the original article). In this table, you are expecting ID, Status and CreatedDate to be updated automatically, so your text file may only have FirstName and LastName values, as below:

        Dinesh,Asanka
        Saman,Liyanage
        Ruwan,Silva
        Susantha,Bathige
        Jude,Peires
        Sanjeewa,Jayawickrama

    If you use bulk insert directly against this table, you will be returned an error: "Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID)." To avoid this, you will need to create a view with only the columns you are expecting to fill and use bulk insert against it. If you check the table afterwards, you will see the values from the text file alongside the default values.
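
    (A minimal T-SQL sketch of the workaround described above, not taken from the article; the table, view, and file names are hypothetical.)

        -- Hypothetical target table: ID, Status and CreatedDate fill themselves in
        CREATE TABLE dbo.People (
            ID          INT IDENTITY(1,1) PRIMARY KEY,
            FirstName   VARCHAR(50),
            LastName    VARCHAR(50),
            Status      TINYINT  DEFAULT 1,
            CreatedDate DATETIME DEFAULT GETDATE()
        );
        GO

        -- View exposing only the columns present in the text file
        CREATE VIEW dbo.vPeopleImport AS
            SELECT FirstName, LastName FROM dbo.People;
        GO

        -- Bulk insert against the view; the remaining columns take their defaults
        BULK INSERT dbo.vPeopleImport
        FROM 'C:\import\people.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');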

    Read the article

  • Javascript: Safely upload a client data file

    - by Jeffrey Sweeney
    I'm (still) working on a template-based XML editing program. It's a GUI-based XML editor that only allows users to add certain tags and attributes based on the requirements. You can see the current version here for an idea. Now, I'd like to allow users to upload their own data templates, but I'm concerned about potential XSS attacks. Currently, the template file is in JavaScript object literal notation, which unsurprisingly is a security nightmare if the user can upload their own. I was thinking of using XML instead, but is there an even better alternative?

    Read the article

  • Reuse the data CRUD methods in data access layer, but they are updated too quickly

    - by ValidfroM
    I agree that we should put CRUD methods in a data access layer. However, in my current project I have some issues. It is a legacy system, and there are quite a lot of CRUD methods in some concrete manager classes. People, including me, seem to just add new methods to them rather than reuse the existing methods, because: we don't know whether an existing method is what we need; even if we have the source code, do we really need to read someone else's code before making a decision?; and the code is updated too quickly for anyone to get familiar with the DAO API. Back to the question: how do you solve this in your projects? If we say "reuse", it really needs to be reusable, rather than just an excuse.

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.

    Read the article

  • Still no detected structured data in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions on what's wrong with my structured data? Google still cannot read it. It looks like this:

        <div class="identity">
          <div itemscope itemtype="http://schema.org/LocalBusiness">
            <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a>
            <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
              <span itemprop="streetAddress">MY_ADDRESS</span>,
              <span itemprop="addressLocality">London</span>,
              <span itemprop="postalCode">SE5 MY_XYZ</span>,
              <span itemprop="addressCountry">UK</span>
            </div>
          </div>
        </div>

    Read the article

  • Mount external HD ubuntu 12.10

    - by Luigi Tiburzi
    Although it's an abundantly treated matter, I'm unable to find an answer valid for my needs. I had a 12.04 installation of Ubuntu and decided to install 12.10. I copied (using GParted) the partition where my system was to an external HD where there is a Windows partition. Then I installed the newest Ubuntu version, and now I want to retrieve some files (for example my .emacs) from that partition. But when I try to mount it, it is not found as sdb, and if I mount it from /dev/usb/hddev0 I don't get any output, only a blinking cursor: no errors, no output. I even tried to mount it as an NTFS disk, but the result was the same. It's like the HD cannot be detected. So how can I access the data on that disk? Could I get it from a GParted terminal instead of the Ubuntu one? Thanks

    Read the article

  • Consolidating hotels data from various booking sites with different IDs or reference

    - by Victor
    In one of my projects, I have data for hotels, and various booking sites are able to book these hotels. For example: Hotel A - Booking (ID = 4002), Expedia (ID = 123), Priceline (ID = 147). The three booking engines each use their own ID to reference Hotel A, so I would need to check manually and make the right reference to the hotel. If I have 100,000 hotels, do I have to check manually 300,000 times (considering 3 booking sites)? They might provide APIs, and then I could cross-check the name, address or latitude/longitude, but if those differ a little bit I might make the wrong reference to the wrong hotel. I'm sure there are better ways to do this. There are many travel sites out there which do hotel price checking across many booking sites, but how do they make sure they are checking the right hotel on those booking sites? Does anyone have any experience with this?

    Read the article

  • disk not accessible

    - by user107044
    I formatted my hard drive yesterday, and it was working well even after the formatting. But when I restarted my system again, it is showing that the space is allotted to my files but they are inaccessible. I have even tried to unhide the files and folders, in case they somehow got hidden, but nothing works. The hard drive is shown as empty, but the properties say that it still contains the data: http://imgur.com/ObjTE In the image, it shows that the directory has only 1 file of size 4.8 kB, but the space being used on the drive is 11.6 GB. Do suggest some solution.

    Read the article

  • Using Ubuntu to recover data from a crashed Windows install

    - by user289391
    I was using Windows on my laptop when suddenly the blue screen of death appeared; then the laptop restarted and displayed this: Intel UNDI, PXE-2.1 (build 083) Copyright (C) 1997-2000 Intel Corporation This Product is covered by one or more of the following patents: US5,307459, US5,434,872, US5732,094, US6579,884, US6115,776 and US6,327,625 Realtek PCIe FE Family Controller Series v120 (01/26/10) PXE-M0F: Exiting PXE ROM. reboot failed I have Ubuntu on an external disk, so I have now booted into that. Two questions: Any theories on what happened? How can I use Ubuntu to recover my data from the Windows install?

    Read the article

  • SQL SERVER - Data Pages in Buffer Pool - Data Stored in Memory Cache

    This will drop all the clean buffers, so we will be able to start again from there. Now, run the following script and check the execution plan of the query. Have you ever wondered what types of data are there in your cache? During SQL Server trainings, I am usually asked if there is any [...]
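
    (The script itself is not included in this excerpt. As a hedged sketch of the kind of inspection involved, assuming the standard DMV sys.dm_os_buffer_descriptors, one can count which data pages currently sit in the buffer pool per database:)

        -- Drop clean buffers first, as the excerpt describes (requires sysadmin; dev/test only)
        DBCC DROPCLEANBUFFERS;

        -- Count cached data pages per database in the buffer pool
        SELECT DB_NAME(database_id) AS database_name,
               COUNT(*)             AS cached_pages
        FROM sys.dm_os_buffer_descriptors
        GROUP BY DB_NAME(database_id)
        ORDER BY cached_pages DESC;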

    Read the article
