Search Results

Search found 84 results on 4 pages for 'claire huang'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • Modify an object without passing it as a parameter

    - by Claire Huang
    I have a global object "X" and a class "A". I need a function F in A that has the ability to modify the content of X. For some reason, X cannot be a data member of A (but A can contain some member Y as a reference to X), and F cannot take any parameters, so I cannot pass X as a parameter to F. (Here A is a dialog and F is a parameterless slot, such as accept().) How can I modify X within F if I cannot pass X into it? Is there any way to let A know that "X" is the object it needs to modify? I tried adding something such as SetItem to specify X in A, but failed.
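
    One common way out is to hand the dialog a pointer to X once, through a setter, and let the parameterless slot work through that stored pointer. A minimal sketch, assuming a hypothetical GlobalData type for X and a MyDialog class for A (none of these names come from the original post):

        #include <QDialog>

        struct GlobalData { int value = 0; };   // stands in for the global object X
        extern GlobalData g_data;               // defined elsewhere in the program

        class MyDialog : public QDialog {
            Q_OBJECT
        public:
            explicit MyDialog(QWidget *parent = nullptr) : QDialog(parent) {}

            // Tell the dialog which object it should modify (call once before exec()).
            void setTarget(GlobalData *target) { m_target = target; }

        public slots:
            // Parameterless slot: it reaches X through the stored pointer.
            void accept() override {
                if (m_target)
                    m_target->value = 42;   // modify X here
                QDialog::accept();
            }

        private:
            GlobalData *m_target = nullptr;
        };

        // Usage:
        //   MyDialog dlg;
        //   dlg.setTarget(&g_data);
        //   dlg.exec();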

    Read the article

  • How can I use Qt to get the HTML code of a redirected page?

    - by Claire Huang
    I'm trying to use Qt to download the HTML code from the following URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=nucleotide&cmd=search&term=AB100362 This URL redirects to www.ncbi.nlm.nih.gov/nuccore/27884304. I tried to do it the following way, but I cannot get anything. It works for some web pages such as www.google.com, but not for this NCBI page. Is there any way to get this page?

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            QNetworkReply *reply = manager.get(request);
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

        void GetGi()
        {
            QString sGetFromURL = "http://www.ncbi.nlm.nih.gov/entrez/query.fcgi";
            QUrl url(sGetFromURL);
            url.addQueryItem("db", "nucleotide");
            url.addQueryItem("cmd", "search");
            url.addQueryItem("term", "AB100362");
            QByteArray InfoNCBI;
            int errorCode = downloadURL(url, InfoNCBI);
            if (errorCode != 0) {
                QMessageBox::about(0, tr("Internet Error"),
                    tr("Internet Error %1: Failed to connect to NCBI.\t\nPlease check your internet connection.").arg(errorCode));
                return;
            }
        }
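
    The most likely culprit is that QNetworkAccessManager in the Qt versions of that era did not follow HTTP redirects automatically, so the reply for the query.fcgi URL is the short redirect response rather than the final page. A minimal sketch of following the redirect manually via QNetworkRequest::RedirectionTargetAttribute (fetchFollowingRedirects is a placeholder helper, not from the original post); newer Qt releases can also be asked to follow redirects through a redirect policy on the request:

        #include <QEventLoop>
        #include <QNetworkAccessManager>
        #include <QNetworkReply>
        #include <QNetworkRequest>
        #include <QUrl>

        // Download 'url', manually following HTTP 3xx redirects.
        // Note: QEventLoop needs a running QCoreApplication/QApplication.
        static QByteArray fetchFollowingRedirects(QNetworkAccessManager &manager,
                                                  QUrl url, int maxHops = 5)
        {
            for (int hop = 0; hop < maxHops; ++hop) {
                QNetworkReply *reply = manager.get(QNetworkRequest(url));

                QEventLoop loop;
                QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
                loop.exec();

                // A redirect shows up as the RedirectionTargetAttribute on the reply.
                const QUrl target =
                    reply->attribute(QNetworkRequest::RedirectionTargetAttribute).toUrl();

                if (target.isEmpty() || target == url) {     // no (further) redirect
                    const QByteArray body = reply->readAll();
                    reply->deleteLater();
                    return body;
                }

                url = url.resolved(target);                  // handles relative Location headers
                reply->deleteLater();
            }
            return QByteArray();                             // too many redirects
        }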

    Read the article

  • Qt Creator on Linux: 32-bit vs. 64-bit

    - by Claire Huang
    My laptop is 64-bit, so when I started to use Qt, I chose the 64-bit Qt Creator. Now I'm facing a problem: I want the executable files I generate to be runnable on 32-bit Linux systems. Can I set Qt Creator to generate 32-bit executables, so that I can decide whether to generate 32-bit or 64-bit ones? I don't want to install another, 32-bit Qt Creator.

    Read the article

  • How to set QNetworkReply properties to get correct NCBI pages?

    - by Claire Huang
    I am trying to get the following URL using the downloadURL function below: http://www.ncbi.nlm.nih.gov/nuccore/27884304 But the data is not what we can see through the browser. Now I know it's because I need to send the correct information (such as the browser); how can I know what kind of information I need to set, and how can I set it (via the setHeader function?)? In VC++, we can use the CInternetSession and CHttpConnection objects to get the correct page without setting any other detailed information; is there any similar way in Qt or another cross-platform C++ network library? (Yes, I need it to be cross-platform.)

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            request.setHeader(QNetworkRequest::ContentTypeHeader,
                "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)");
            QNetworkReply *reply = manager.get(request);
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();
            QVariant statusCodeV = reply->attribute(QNetworkRequest::RedirectionTargetAttribute);
            QUrl redirectTo = statusCodeV.toUrl();
            if (!redirectTo.isEmpty()) {
                if (redirectTo.host().isEmpty()) {
                    const QByteArray newaddr = ("http://" + url.host() + redirectTo.encodedPath()).toAscii();
                    redirectTo.setEncodedUrl(newaddr);
                    redirectTo.setHost(url.host());
                }
                return (downloadURL(redirectTo, data));
            }
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }
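
    For the header question: the snippet above stores the browser identification in QNetworkRequest::ContentTypeHeader, but servers that vary their output by browser normally look at the User-Agent header, which Qt exposes through QNetworkRequest::setRawHeader. A minimal sketch (makeBrowserLikeRequest and the header values are only illustrative, not from the original post):

        #include <QNetworkAccessManager>
        #include <QNetworkReply>
        #include <QNetworkRequest>
        #include <QUrl>

        // Build a request that identifies itself like a regular browser.
        QNetworkRequest makeBrowserLikeRequest(const QUrl &url)
        {
            QNetworkRequest request(url);

            // User-Agent is a raw HTTP header; ContentTypeHeader describes the body
            // of the request and is not what servers use to detect the browser.
            request.setRawHeader("User-Agent",
                                 "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                                 "(KHTML, like Gecko) Chrome/50.0 Safari/537.36");

            // Some servers also vary their output on Accept / Accept-Language.
            request.setRawHeader("Accept", "text/html,application/xhtml+xml");
            request.setRawHeader("Accept-Language", "en-US,en;q=0.8");

            return request;
        }

        // Usage (inside the existing downloadURL function, for example):
        //   QNetworkReply *reply = manager.get(makeBrowserLikeRequest(url));

    Comparing the headers a real browser sends (visible in its developer tools) with the ones the application sends is the quickest way to find out which ones a particular site cares about.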

    Read the article

  • Cannot run an executable binary file on another Linux system?

    - by Claire Huang
    I'm using Ubuntu 10.04 and Qt 4.6, and I've created an executable binary on my own computer through Qt Creator. Now I want to put the executable on CentOS 5, but it seems it cannot run there. Do I need to set some compile parameters to make it runnable on Linux distributions other than Ubuntu? Or do I need to ship some library files together with the binary? (On Windows, the .exe file has to be shipped together with some .dll files to satisfy its dynamic library dependencies; is there a similar issue on Linux?) Thanks for your help!

    Read the article

  • How can I set QNetworkReply properties to get correct NCBI pages?

    - by Claire Huang
    I am trying to get the following URL using the downloadURL function below: http://www.ncbi.nlm.nih.gov/nuccore/27884304 But the data is not what we can see through the browser. Now I know it's because I need to send the correct information (such as the browser); how can I know what kind of information I need to set, and how can I set it (via the setHeader function?)? In VC++, we can use the CInternetSession and CHttpConnection objects to get the correct page without setting any other detailed information; is there any similar way in Qt or another cross-platform C++ network library? (Yes, I need it to be cross-platform.)

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            request.setHeader(QNetworkRequest::ContentTypeHeader,
                "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)");
            QNetworkReply *reply = manager.get(request);
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();
            QVariant statusCodeV = reply->attribute(QNetworkRequest::RedirectionTargetAttribute);
            QUrl redirectTo = statusCodeV.toUrl();
            if (!redirectTo.isEmpty()) {
                if (redirectTo.host().isEmpty()) {
                    const QByteArray newaddr = ("http://" + url.host() + redirectTo.encodedPath()).toAscii();
                    redirectTo.setEncodedUrl(newaddr);
                    redirectTo.setHost(url.host());
                }
                return (downloadURL(redirectTo, data));
            }
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

    Read the article

  • Can I uncheck a group of radio buttons inside a group box?

    - by Claire Huang
    Radio buttons inside a group box are treated as a group of buttons: they are mutually exclusive. How can I clear their check states? I have several radio buttons and one of them is checked. How can I "clean" (uncheck) all of the radio buttons? setChecked doesn't work within a group; I tried the following but failed. My code is below; radioButton is inside a groupBox, and I want to uncheck it. The first setChecked works, but the second one doesn't: the radio button does not get unchecked.

        MainWindow::MainWindow(QWidget *parent) :
            QMainWindow(parent),
            ui(new Ui::MainWindow)
        {
            QRadioButton *radioButton;
            ui->setupUi(this);
            radioButton->setChecked(true);
            radioButton->setChecked(false);
        }

    Where is the problem in my code?
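
    The usual explanation is that an auto-exclusive group always keeps exactly one button checked, so calling setChecked(false) on the checked button is ignored; temporarily switching off auto-exclusivity lets the state be cleared. A minimal sketch (clearRadioGroup is a placeholder name, and ui->groupBox in the usage comment is assumed to exist in the form):

        #include <QGroupBox>
        #include <QRadioButton>

        // Uncheck every QRadioButton inside a group box.  While auto-exclusivity
        // is on, Qt refuses to let the last checked button become unchecked, so
        // it is switched off just long enough to clear the state.
        void clearRadioGroup(QGroupBox *box)
        {
            const QList<QRadioButton *> buttons = box->findChildren<QRadioButton *>();
            for (QRadioButton *button : buttons) {
                button->setAutoExclusive(false);
                button->setChecked(false);
                button->setAutoExclusive(true);
            }
        }

        // Usage: clearRadioGroup(ui->groupBox);

    Putting the buttons into a QButtonGroup and calling setExclusive(false) before unchecking is another common variant of the same idea.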

    Read the article

  • Using a 64-bit Linux kernel, can't see more than 4 GB of RAM in /proc/meminfo

    - by Chris Huang-Leaver
    I'm running my new computer, which has 8 GB of RAM installed; the full amount is visible from the BIOS page, but it does not show in /proc/meminfo. uname -a: Linux localhost 3.0.6-gentoo #2 SMP PREEMPT Sat Nov 19 10:45:22 GMT-- x86_64 AMD Phenom(tm) II X4 955 Processor AuthenticAMD GNU/Linux The result of /proc/meminfo is as follows (thanks Andrey): MemTotal: 4021348 kB MemFree: 1440280 kB Buffers: 23696 kB Cached: 1710828 kB SwapCached: 4956 kB Active: 1389904 kB Inactive: 841364 kB Active(anon): 1337812 kB Inactive(anon): 714060 kB Active(file): 52092 kB Inactive(file): 127304 kB Unevictable: 32 kB Mlocked: 32 kB SwapTotal: 8388604 kB SwapFree: 8047900 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 492732 kB Mapped: 47528 kB Shmem: 1555120 kB Slab: 267724 kB SReclaimable: 177464 kB SUnreclaim: 90260 kB KernelStack: 1176 kB PageTables: 12148 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 10399276 kB Committed_AS: 3293896 kB VmallocTotal: 34359738367 kB VmallocUsed: 317008 kB VmallocChunk: 34359398908 kB AnonHugePages: 120832 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 23552 kB DirectMap2M: 3088384 kB DirectMap1G: 1048576 kB I have tried using mem=8G as a kernel boot parameter, and I read a post about setting HIGHMEM64G to yes before realising that only applies to 32-bit kernels. Trying dmidecode -t memory: SMBIOS 2.7 present. Handle 0x0026, DMI type 16, 23 bytes Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: Multi-bit ECC Maximum Capacity: 32 GB Error Information Handle: Not Provided Number Of Devices: 4 Handle 0x0028, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 4096 MB Form Factor: DIMM Set: None Locator: DIMM0 Bank Locator: BANK0 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz Manufacturer: Manufacturer0 Serial Number: SerNum0 Asset Tag: AssetTagNum0 Part Number: Array1_PartNumber0 Rank: Unknown Handle 0x002A, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: Unknown Data Width: 64 bits Size: No Module Installed Form Factor: DIMM Set: None Locator: DIMM1 Bank Locator: BANK1 Type: Unknown Type Detail: Synchronous Speed: Unknown Manufacturer: Manufacturer1 Serial Number: SerNum1 Asset Tag: AssetTagNum1 Part Number: Array1_PartNumber1 Rank: Unknown Handle 0x002C, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 4096 MB Form Factor: DIMM Set: None Locator: DIMM2 Bank Locator: BANK2 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz Manufacturer: Manufacturer2 Serial Number: SerNum2 Asset Tag: AssetTagNum2 Part Number: Array1_PartNumber2 Rank: Unknown Handle 0x002E, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: Unknown Data Width: 64 bits Size: No Module Installed Form Factor: DIMM Set: None Locator: DIMM3 Bank Locator: BANK3 Type: Unknown Type Detail: Synchronous Speed: Unknown Manufacturer: Manufacturer3 Serial Number: SerNum3 Asset Tag: AssetTagNum3 Part Number: Array1_PartNumber3 Rank: Unknown

    Read the article

  • Map Caps-Lock to Control in Windows 8.1

    - by Eric Huang
    Before the Windows 8.1 update, I was able to map Caps-Lock to Control through the kind of registry tweak in this post: Remapping a keyboard key in Windows 8.1. However, after updating to 8.1, my tweak no longer works. What I had done was:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
        "Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00

    I'm guessing Windows 8.1 may have changed how it interprets the keyboard layout registry. I'm an avid Emacs user, so this problem is a life-or-death scenario for me.

    Read the article

  • About load average in htop: how do I decide if the server is still doing OK?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has recently been quite loaded, and the load average is showing something like this: Load average: 3.10 2.56 1.63 I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages The article says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And do these numbers mean I should be wary of the load now? The performance seems totally fine, though, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime, the load average always shows these high numbers... here is another snapshot taken while writing: Load average: 3.03 2.77 1.97 Load average: 0.41 1.29 1.60 <---- 5 more minutes later So I am wondering how much room is left for this site to grow with the current configuration, and what proactive actions I should take in advance. I don't want to wait until the server bursts. Thanks.
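
    For context: on Linux the load average counts runnable (and uninterruptibly waiting) tasks rather than a percentage, so it is normally judged relative to the number of CPU cores; a sustained value above the core count means work is queueing. A small illustrative C++ snippet of that comparison, reading /proc/loadavg (not from the original question):

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <thread>

        int main()
        {
            // /proc/loadavg starts with the 1-, 5- and 15-minute load averages.
            std::ifstream loadavg("/proc/loadavg");
            double load1 = 0, load5 = 0, load15 = 0;
            loadavg >> load1 >> load5 >> load15;

            const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

            std::cout << "1-minute load per core: " << load1 / cores << '\n';
            if (load1 > cores)
                std::cout << "More runnable tasks than cores: work is queueing.\n";
            return 0;
        }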

    Read the article

  • How could load average numbers from 'htop' exceed 100% CPU utilization?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has recently been quite loaded, and the load average is showing something like this: Load average: 3.10 2.56 1.63 I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages The article says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And do these numbers mean I should be wary of the load now? The performance seems totally fine, though, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime, the load average always shows these high numbers... here is another snapshot taken while writing: Load average: 3.03 2.77 1.97 Load average: 0.41 1.29 1.60 <---- 5 more minutes later So I am wondering how much room is left for this site to grow with the current configuration, and what proactive actions I should take in advance. I don't want to wait until the server bursts. Thanks.

    Read the article

  • bash disable line wrap without truncation

    - by Eric Huang
    I am using a template-heavy library in C++ and need to understand the template errors. Reading line-wrapped template errors is a serious pain. Is there a way to disable line wrapping in bash without also truncating the output? Additionally, is there a way to do horizontal scrolling on the output? I have seen this answer, how to make bash not to wrap output?, but there the output is truncated. The solution doesn't have to be bash-targeted; if there is a method for this using another shell, tmux, piping make output to another program, compiling from within vim, etc., I'll use it. (Except for copy-pasting into gedit.)

    Read the article

  • How to set up a server to accept PEM (private RSA key) login without a password, like EC2?

    - by Chandler.Huang
    I manage a group of VMs, and I need every VM to create an SSH tunnel to a specific host A. One way to do this is to append each VM's public key to host A's authorized_keys, but then I have to do that append every time I create a VM. So I am trying to configure host A to accept PEM (private key) login without a password, just like EC2, where the client can use "ssh -i PEM" to log in to host A. But I have tried in vain for hours: I created an RSA public/private key pair and let the VM use the private key to log in, but no matter what I do, host A still asks for a password. Is there anything I missed? Thanks.

    Read the article

  • Does it make a difference to read from a file instead of from MySQL?

    - by Joe Huang
    My web server is currently quite loaded, and I have a PHP file that is accessed very often remotely. The PHP file basically makes a MySQL query and returns a JSON-formatted string. I am thinking of using a cron job to write the necessary data into a file every 15 minutes, so the PHP file doesn't make a MySQL query and instead reads from the file. Does it make a difference? I mean, would it alleviate the server load (CPU/MySQL) a bit?

    Read the article

  • Responsive Web Design by Ethan Marcotte, reviewed by Ihèb BEN ROMDHANE

    Hello, here is my review of the book Responsive Web Design by Ethan Marcotte. Quote: Yet another very good book in the excellent A Book Apart collection, in which Ethan Marcotte covers, in a very clear and well-argued way, the different aspects involved in creating a fluid, responsive layout. The examples presented rely on the principle of the grids of m...

    Read the article

  • Oracle OpenWorld Series: All Things Mobile

    - by Michelle Kimihira
    I caught up with Joe Huang, Senior Principal Product Manager, Mobile Application Development Framework, to hear about his recommendations for Oracle OpenWorld. Use this Focus On document, which provides a roadmap to must-attend sessions and demos. By Joe Huang: This year's OpenWorld promises to be "THE" event for anyone interested in mobile enterprise applications. Although Oracle has had a rich portfolio of mobile products for many years now, there is a much stronger focus on mobile this year. Every single one of our customers is looking to develop a mobile strategy and bring key business processes to mobile users, and as you will see in the various keynotes, sessions, and demos during OpenWorld, Oracle is clearly the leader in mobile technologies and applications. Look for mobile development technologies being demonstrated in the Oracle Red Lounge, located at the Moscone North Upper Lobby, where innovative technologies from Oracle are being showcased. A few select sessions where mobile development technologies will be highlighted:
    · Monday, 10/1, 10:45 AM – 11:45 AM - GEN9398: The Future Development for Oracle Fusion – From Desktop to Mobile to Cloud. See the latest and greatest in Oracle development technologies. A key customer will be demonstrating the application they built using a beta version of ADF Mobile. Marriott Marquis, Salon 8
    · Monday, 10/1, 1:45 PM – 2:45 PM - GEN11554: Extend Oracle Applications to Mobile Devices with Oracle's Mobile Technologies. See how to leverage Oracle development technologies like ADF Mobile to mobilize Oracle applications. Moscone West, 3002/3004
    · Monday, 10/1, 4:45 PM – 5:45 PM - GEN11451: Building Mobile Applications with Oracle Cloud. See how Oracle offers a simpler way of developing and deploying cross-device mobile applications, enabling you to access applications, data and services from mobile channels in an easier way. Moscone West, 2002/2004
    · Tuesday, 10/2, 11:45 AM – 12:45 PM - CON3824: Mobile-Enabled Oracle Fusion Middleware and Enterprise Applications with Oracle ADF. See how Oracle Fusion Middleware and ADF Mobile together deliver a complete and powerful platform for enterprise mobile applications. A key customer will also be demonstrating an application built using the ADF Mobile beta that extends an Oracle application to mobile devices. Moscone South, 306
    Additional Information
    · Relevant Blogs: Oracle OpenWorld Countdown Begins, Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Amit Zavery's General Session, Hassan Rizvi's General Session, Oracle OpenWorld Blog
    · Focus On Docs: Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Mobile
    · Product Information on Oracle.com: Oracle Fusion Middleware
    · Subscribe to our regular Fusion Middleware Newsletter
    · Follow us on Twitter and Facebook

    Read the article

  • Content API for Shopping Office Hours - June 12, 2012

    Content API for Shopping Office Hours - June 12, 2012. A Hangout discussing Product Listing Ads (PLAs) and the Google Affiliate Network (GAN) with guests Mark Coppin (GAN) and Claire Hugo (PLAs) of Google. In the Hangout, we reference the video "How to create a new Product Listing Ads campaign" (www.youtube.com), which can be found on the Getting Started page of the Shopping/Ads integration site (www.google.com). Also, check out the GAN site to learn more: www.google.com From: GoogleDevelopers | Views: 703 | 6 ratings | Time: 31:23

    Read the article

  • How do I troubleshoot a problem syncing Google contacts to an iPad?

    - by Daryl Spitzer
    I use my MacBook Pro to sync content onto my wife's iPad. (She doesn't have a computer.) She doesn't want all my contacts from the Address Book app on my MBP. But she does want her Google contacts on her iPad. I've tried the following settings in iTunes: I created a group in my address book called "Claire's" (and put just a couple contacts in it), since if one enables "Sync Address Book Contacts" one either has to select "All" or at least one group. I've double-checked her email address in the dialog that comes up after pressing the "Configure" button. But after syncing, only the couple contacts in the "Claire's" group are in the Contacts app on her iPad. I've checked her Google contacts, and she has over 2000. For some reason they're not syncing. How do I find out why they're not? I looked to see if I could just use an app to do the sync on the iPad, but couldn't find one with good ratings. Do you have one to recommend so I can give up struggling with getting this working in iTunes?

    Read the article

  • Error about 'invalid JSON' with a CouchDB view, but the JSON is fine...

    - by Chris Huang-Leaver
    I am trying to set up the following view on CouchDB:

        {
          "_id": "_design/id",
          "_rev": "1-9be2e55e05ac368da3047841f301203d",
          "language": "javascript",
          "views": {
            "by_id":        { "map": "function(doc) { emit(doc.id, doc)}" },
            "from_user_id": { "map": "function(doc) { if (doc.from_user_id) {emit(doc.from_user_id, doc)}}" },
            "from_user":    { "map": "function(doc) { if (doc.from_user) {emit(doc.from_user, doc)}}" },
            "to_user_id":   { "map": "function(doc) {if (doc.to_user_id){ emit(doc.to_user_id, doc)}}" },
            "to_user":      { "map": "function(doc) {if (doc.to_user){ emit(doc.to_user, doc)}}" },
            "max_id": {
              "map": "function(doc) { if (doc.id) {emit(doc._id, eval(doc.id))}}",
              "reduce": "function(key,value) { a = value[0]; for (i=1; i <value.length; ++i){a = Math.max(a,value[i])} return a}"
            }
          }
        }

    When I try to PUT this using curl:

        curl -X PUT -d keys.json $CDB/_design/id
        {"error":"bad_request","reason":"invalid UTF-8 JSON"}

    I know it's not invalid JSON, because I tested it using the 'json' library built into Python 2.6 and it loads fine. JavaScript screw-ups give me the error 'must evaluate to a function'. What else might be wrong with it?

    Read the article

  • Memory problem with MySQL "SELECT *"

    - by Austin Huang
    Dear all: I'm new to MySQL, and I have a question about memory. I have a 200 MB table (MyISAM, 2,000,000 rows), and I try to load all of it into memory. I use Python (actually MySQLdb) with the SQL: SELECT * FROM table. However, from Linux "top" I saw that this Python process uses 50% of my memory (which is 6 GB in total). I'm curious why it uses about 3 GB of memory for only a 200 MB table. Thanks in advance!

    Read the article

  • Reading XML with XmlReader in C#

    - by Gloria Huang
    I'm trying to read the following XML document as fast as I can and let additional classes manage the reading of each sub-block:

        <ApplicationPool>
          <Accounts>
            <Account>
              <NameOfKin></NameOfKin>
              <StatementsAvailable>
                <Statement></Statement>
              </StatementsAvailable>
            </Account>
          </Accounts>
        </ApplicationPool>

    I'm trying to use the XmlReader object to read each Account and subsequently the StatementsAvailable. Do you suggest using XmlReader.Read, checking each element and handling it? I've thought of separating my classes to handle each node properly. So there's an AccountBase class that accepts an XmlReader instance and reads the NameOfKin and several other properties about the account. Then I want to iterate through the Statements and let another class fill itself out about each Statement (and subsequently add it to an IList). So far I have the "per class" part done by calling XmlReader.ReadElementString(), but I can't work out how to tell the reader to move to the StatementsAvailable element so I can iterate through them and let another class read each of those properties. Sounds easy!

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have this following pig script, which works perfectly using grunt shell (stored the results to HDFS without any issues); however, the last job (ORDER BY) failed if I ran the same script using Java EmbeddedPig. If I replace the ORDER BY job by others, such as GROUP or FOREACH GENERATE, the whole script then succeeded in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Anyone has any experience with this? Any help would be appreciated! The Pig script: REGISTER pig-udf-0.0.1-SNAPSHOT.jar; user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float); simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score; grouped_user_similarity = GROUP simplified_user_similarity BY user_id; ordered_user_similarity = FOREACH grouped_user_similarity { sorted = ORDER simplified_user_similarity BY sim_score DESC; top = LIMIT sorted 10; GENERATE group, top; }; top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10); all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0); grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id; influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score; ordered_influence_scores = ORDER influence_scores BY influence_score DESC; STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage(); The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs

    Read the article

  • ASP.NET MVC solution to a forms application?

    - by Gloria Huang
    Hello, we're building a survey system utilising ASP.NET MVC and wondered if anyone can offer suggestions on the architecture. Here's the problem we're trying to solve. Essentially an agency sends out several surveys every year. They're very structured and not like SurveyMonkey-style surveys - they're really applications rather than simple feedback forms. Much like a visa application, there are lots of things the users need to do, and sometimes it takes them 2-3 weeks to fill one out. They can upload files (proofs of purchase etc. - PDF/JPG) and also multiple "items". For example, say they've worked for McDonald's: there could be 20 different franchises, and they build a list of locations they've worked at. Three weeks later there could be another 3 new locations, and 2 may have closed down. So we need to ensure the forms can handle those situations. The forms themselves (markup and data) change every year - I should mention that this is for a taxation/finance/budget system. We were thinking of using MVC, with XML to store the data (temporarily), XSD to validate the data, and XSL to transform the data into presentable markup (for them to fill out); then, once they "Submit" an application, it gets stored into the DB in the relevant areas. When the user starts the application process, they can save their progress so far (we validate whatever they entered and ignore anything they haven't), save it as an XML blob and store it in the DB. When they're finally ready to submit it, we do a full validation, upload the files and store them securely (they contain business proofs and accounting statements), and then run some workflows. What I'm really concerned about is how to manage changing form versions (a year later). How are form/application systems written these days? We have 2 months to pull this off and about 30 forms to deliver. So 30x XML, 30x XSD, 30x XSL.

    Read the article

< Previous Page | 1 2 3 4  | Next Page >