Search Results

Search found 64995 results on 2600 pages for 'data import'.


  • Trouble connecting to the App Store in my In-App Purchase application (iPhone)

    - by riteshkumar1905
    There is a problem connecting to the App Store in my application. Everything runs fine in the Simulator, but when I run on an actual iPhone the App Store never connects. I am also enclosing the code that I call on a button:

        #import "BuyController.h"
        #import "InAppPurchaseManager.h"
        #import "SKProducts.h"

        #define kInAppPurchaseProUpgradeProductId @"com.vigyaapan.iWorkOut1"

        @implementation BuyController

        - (IBAction)buy:(id)sender {
            /* get the product description (defined in earlier sections) */
            //[self requestProUpgradeProductData];
            if ([SKPaymentQueue canMakePayments]) {
                InAppPurchaseManager *observer = [[InAppPurchaseManager alloc] init];
                [[SKPaymentQueue defaultQueue] addTransactionObserver:observer];
                //NSURL *sandboxStoreURL = [[NSURL alloc] initWithString:@"http://sandbox.itunes.apple.com/verifyReceipt"];
                //[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://sandbox.itunes.apple.com"]];
                [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://phobos.apple.com/WebObjects/com.vigyaapan.iWorkOut1?id=9820091347&mt=8"]];
                SKPayment *payment = [SKPayment paymentWithProductIdentifier:@"com.vigyaapan.iWorkOut1"];
                [[SKPaymentQueue defaultQueue] addPayment:payment];
            } else {
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"MyApp"
                                                                message:@"You are not authorized to purchase from AppStore"
                                                               delegate:self
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil];
                [alert show];
                [alert release];
            }
            SKPayment *payment = [SKPayment paymentWithProductIdentifier:kInAppPurchaseProUpgradeProductId];
            [[SKPaymentQueue defaultQueue] addPayment:payment];
        }

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
        }

        // Override to allow orientations other than the default portrait orientation.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations.
            return (interfaceOrientation == UIInterfaceOrientationPortrait);
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview; release cached data.
            [super didReceiveMemoryWarning];
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view, e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end
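    A frequent culprit with this symptom (works in the Simulator, fails on the device) is queuing the payment without validating the product first, or testing on the device with an iTunes account that is not a sandbox test user. As a rough sketch, not a confirmed fix, the usual StoreKit flow fetches the product via SKProductsRequest and only then adds the payment:

        - (void)requestProUpgradeProductData {
            NSSet *identifiers = [NSSet setWithObject:kInAppPurchaseProUpgradeProductId];
            SKProductsRequest *request = [[SKProductsRequest alloc] initWithProductIdentifiers:identifiers];
            request.delegate = self; // requires <SKProductsRequestDelegate>
            [request start];         // productsRequest:didReceiveResponse: fires asynchronously
        }

        - (void)productsRequest:(SKProductsRequest *)request didReceiveResponse:(SKProductsResponse *)response {
            if ([response.products count] > 0) {
                SKProduct *product = [response.products objectAtIndex:0];
                [[SKPaymentQueue defaultQueue] addPayment:[SKPayment paymentWithProduct:product]];
            } else {
                NSLog(@"Invalid product identifiers: %@", response.invalidProductIdentifiers);
            }
            [request autorelease];
        }

    Opening the phobos.apple.com URL in the middle of the purchase is unnecessary and can bounce the user out to the App Store app before the transaction completes.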


  • Using java.util.logging, is it possible to restart logs after a certain period of time?

    - by Fry
    I have some Java code that will be running as an importer for data for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems a little inadequate now given the amount of data passing through the importer. Often the importer will get data that the main system has no information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that this log tends to grow in size very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody know if this can be done in the logging classes, or would I have to switch to log4j or something custom? Thanks for any help!
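    For what it's worth, java.util.logging's FileHandler only rotates by size (via its FileHandler(pattern, limit, count) constructor), not by time. One workaround, sketched below with a hypothetical DailyLogRotator class, is to detach the current handler and attach a fresh one on a timer:

        import java.io.IOException;
        import java.util.Timer;
        import java.util.TimerTask;
        import java.util.logging.FileHandler;
        import java.util.logging.Logger;
        import java.util.logging.SimpleFormatter;

        public class DailyLogRotator {
            private static final long PERIOD_MS = 24L * 60 * 60 * 1000; // daily
            private final Logger logger;
            private FileHandler handler;

            public DailyLogRotator(Logger logger) throws IOException {
                this.logger = logger;
                rotate();
                // Daemon timer: swap in a fresh log file every 24 hours.
                new Timer(true).scheduleAtFixedRate(new TimerTask() {
                    public void run() {
                        try { rotate(); } catch (IOException e) { e.printStackTrace(); }
                    }
                }, PERIOD_MS, PERIOD_MS);
            }

            private synchronized void rotate() throws IOException {
                if (handler != null) {
                    logger.removeHandler(handler);
                    handler.close();
                }
                // One file per rotation; name it however the importer prefers.
                handler = new FileHandler("importer-" + System.currentTimeMillis() + ".log");
                handler.setFormatter(new SimpleFormatter());
                logger.addHandler(handler);
            }
        }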


  • GWT combobox not displaying correctly

    - by James
    Hi, I am using GWT with GWT-EXT, running in Glassfish. I create two combo boxes as follows:

        import com.extjs.gxt.ui.client.widget.form.ComboBox;
        import com.extjs.gxt.ui.client.widget.form.SimpleComboBox;

        this.contentPanel = new ContentPanel();
        this.contentPanel.setFrame(true);
        this.contentPanel.setSize((int) (Window.getClientWidth() * 0.95), 600);
        this.contentPanel.setLayout(new FitLayout());
        initWidget(this.contentPanel);

        SimpleComboBox<String> combo = new SimpleComboBox<String>();
        combo.setEmptyText("Select a topic...");
        combo.add("String1");
        combo.add("String2");
        this.contentPanel.add(combo);

        ComboBox combo1 = new ComboBox();
        combo1.setEmptyText("Select a topic...");
        ListStore topics = new ListStore();
        topics.add("String3");
        topics.add("String4");
        combo1.setStore(topics);
        this.contentPanel.add(combo1);

    When these are loaded in the browser (IE 8.0, Firefox 3.6.6 or Chrome 10.0) the combo boxes are shown but don't have the pull-down arrow. They look like a text field with the "Select a topic..." text. When you select the text it disappears, and if you type a character and then delete it, the options are shown (i.e. the pull-down is invoked); however, there is still no pull-down arrow. Does anyone know what the issue might be, or how I can investigate further? Is it possible to see the actual HTML the browser is getting? When I View Page Source I only get the landing-page HTML. Additionally, I have a com.google.gwt.user.client.ui.Grid that does not render correctly: it is in table format but has no grid lines, header bar, etc. Cheers, James
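    In case it helps: missing trigger arrows and an unstyled Grid usually mean the widget stylesheets and images never loaded, so the widgets fall back to bare form fields. GXT expects its resources to be copied into the war and referenced from the host page with something like the line below (the path is an assumption; it depends on where you copied the GXT resources):

        <link rel="stylesheet" type="text/css" href="gxt/css/gxt-all.css" />

    To see the actual HTML the browser is working with (rather than the landing page source), a DOM inspector such as Firebug or the Chrome developer tools shows the live, GWT-generated DOM.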


  • Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be done?

    - by Mikey John
    Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be done? Here's my stored procedure:

        SET ANSI_NULLS ON
        SET QUOTED_IDENTIFIER ON
        GO

        ALTER PROCEDURE [dbo].[AddPerson]
            @Data AS XML
        AS
            INSERT INTO Persons (name, image_binary)
            SELECT rowVals.value('./@Name', 'varchar(64)') AS [Name],
                   rowVals.value('./@ImageBinary', 'varbinary(MAX)') AS [ImageBinary]
            FROM @Data.nodes('/Data/Names') AS b(rowVals)

            SELECT SCOPE_IDENTITY() AS Id

    In my schema, Name is of type String and ImageBinary is of type byte[].
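    It can be done, with one caveat: the byte[] arrives inside the XML as base64 text, so the value() call generally needs an explicit xs:base64Binary() cast in the XQuery expression before SQL Server will hand back varbinary. A sketch of the relevant line:

        SELECT rowVals.value('xs:base64Binary(./@ImageBinary)', 'varbinary(MAX)') AS [ImageBinary]
        FROM @Data.nodes('/Data/Names') AS b(rowVals)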


  • Calling Web Services Asynchronously in Page_Load Event

    - by Umar Siddique
    I'm working on a web application using VB.NET. In the Page_Load event I am calling a remote web service, which takes time to bring back the data. During this call, none of the other contents of the page are shown (rendered). I want to call this remote web service asynchronously so that the other data on the page is displayed right away and the web service data is displayed when it becomes available.
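    One caveat up front: ASP.NET sends the page as a single response, so a server-side async call alone will not paint the rest of the page early; for progressive display, load the slow part client-side (an UpdatePanel or a JavaScript call after load). What asynchronous pages do buy is freeing the worker thread during the remote call. A sketch of that pattern, assuming Async="true" in the @ Page directive and a placeholder service proxy named remoteService:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
            RegisterAsyncTask(New PageAsyncTask( _
                AddressOf BeginCall, AddressOf EndCall, AddressOf CallTimeout, Nothing))
        End Sub

        Private Function BeginCall(ByVal sender As Object, ByVal e As EventArgs, _
                ByVal cb As AsyncCallback, ByVal state As Object) As IAsyncResult
            Return remoteService.BeginGetData(cb, state) ' generated Begin/End proxy methods
        End Function

        Private Sub EndCall(ByVal ar As IAsyncResult)
            resultLabel.Text = remoteService.EndGetData(ar)
        End Sub

        Private Sub CallTimeout(ByVal ar As IAsyncResult)
            resultLabel.Text = "The remote service timed out."
        End Sub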


  • Python Imaging: YCbCr problems

    - by daver
    Hi, I'm doing some image processing in Python using PIL. I need to extract the luminance layer from a series of images, do some processing on it with numpy, then put the edited luminance layer back into the image and save it. The problem is, I can't seem to get any meaningful representation of my image in YCbCr format, or at least I don't understand what PIL is giving me as YCbCr. The PIL documentation claims the YCbCr format gives three channels, but when I grab the data out of the image using np.asarray, I get four channels. OK, so I figured one must be alpha. Here is some code I'm using to test this process:

        import Image as im
        import numpy as np

        pengIm = im.open("Data\\Test\\Penguins.bmp")
        yIm = pengIm.convert("YCbCr")
        testIm = np.asarray(yIm)
        grey = testIm[:, :, 0]
        grey = grey.astype('uint8')
        greyIm = im.fromarray(grey, "L")
        greyIm.save("Data\\Test\\grey.bmp")

    I'm expecting a greyscale version of my image, but what I get is this jumbled-up mess: http://i.imgur.com/zlhIh.png. Can anybody explain where I'm going wrong? The same code in MATLAB works exactly as I expect.
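    In case it helps: older PIL builds are known to hand numpy a four-byte-per-pixel buffer for YCbCr images, which scrambles the channel layout when np.asarray slices it. A workaround (a sketch, not verified against every PIL version) is to pull the luminance band through PIL itself, bypassing the raw buffer:

        import Image as im
        import numpy as np

        pengIm = im.open("Data\\Test\\Penguins.bmp")
        yIm = pengIm.convert("YCbCr")

        # split() returns the Y, Cb and Cr bands as separate "L" images.
        y, cb, cr = yIm.split()
        grey = np.asarray(y, dtype=np.uint8)

        # ... numpy processing on grey goes here ...

        im.fromarray(grey, "L").save("Data\\Test\\grey.bmp")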


  • Static nested class visibility issue with Scala / Java interop

    - by Matt R
    Suppose I have the following Java file in a library:

        package test;

        public abstract class AbstractFoo {
            protected static class FooHelper {
                public FooHelper() {}
            }
        }

    I would like to extend it from Scala:

        package test2

        import test.AbstractFoo

        class Foo extends AbstractFoo {
          new AbstractFoo.FooHelper()
        }

    I get an error, "class FooHelper cannot be accessed in object test.AbstractFoo". (I'm using a Scala 2.8 nightly.) The following Java compiles correctly:

        package test2;

        import test.AbstractFoo;

        public class Foo2 extends AbstractFoo {
            {
                new FooHelper();
            }
        }

    The Scala version also compiles if it's placed in the test package. Is there another way to get it to compile?
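    One workaround that should compile (a sketch; FooHelperBridge is a made-up name) is a small Java shim placed in the library's package, which can see the protected nested class and re-export it publicly for Scala:

        package test;

        // Same package as AbstractFoo, so the protected FooHelper is visible here.
        public class FooHelperBridge extends AbstractFoo.FooHelper {
            public FooHelperBridge() { super(); }
        }

    Scala code in test2 can then write new FooHelperBridge() without tripping over Scala's stricter reading of Java's protected access.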


  • Group variables in a boxplot in R

    - by tao.hong
    I am trying to generate a boxplot whose data come from two scenarios. In the plot, I would like to group boxes by their names (so there will be two boxes per variable). I know ggplot would be a good choice, but I got errors which I could not figure out. Can anyone give me some suggestions?

        sensitivity_out1 <- structure(c(0.0522902104339716, 0.0521369824334004, 0.0520240345973737, 0.0519818337359876, 0.051935071418996, 0.0519089404325544, 0.000392698277338341, 0.000326135474295325, 0.000280863338343747, 0.000259631566041935, 0.000246594043996332, 0.000237923540393391, 0.00046732650331544, 0.000474448907808135, 0.000478287273678457, 0.000480194683464109, 0.000480631753078668, 0.000481760272726273, 0.000947965771207979, 0.000944821699830455, 0.000939631071343889, 0.000937186900570605, 0.000936007346568281, 0.000934756220144141, 0.00132442589501872, 0.00132658367774979, 0.00133334696220742, 0.00133622384928092, 0.0013381577476241, 0.00134005741746304, 0.0991622968751298, 0.100791399440082, 0.101946808417405, 0.102524244727408, 0.102920085260477, 0.103232984259916, 0.0305219507186844, 0.0304635269233494, 0.0304161055015213, 0.0303742106794513, 0.0303381888169022, 0.0302996157711171, 1.94268588634518e-05, 2.23991225564447e-05, 2.5756135487907e-05, 2.79997917298194e-05, 3.00753967077715e-05, 3.16270817369878e-05, 0.544701146678523, 0.542887331601984, 0.541632986366816, 0.541005610554556, 0.540617004208336, 0.540315690692195, 0.000453386694666078, 0.000448473414508756, 0.00044692043197248, 0.000444826296854332, 0.000445747996014684, 0.000444764303682453, 0.000127569551159321, 0.000128422491392669, 0.00012933662856487, 0.000129941842982939, 0.000129578971489026, 0.000131113075233758, 0.00684610571790029, 0.00686349387897349, 0.00687468164010565, 0.00687880720347743, 0.00688275579317197, 0.00687822247621936), .Dim = c(6L, 12L))

        sensitivity_out2 <- structure(c(0.0189965816735366, 0.0189995096225103, 0.0190099362589894, 0.0190033523148514, 0.01900896721937, 0.0190099427513381, 0.00192043989797585, 0.00207303208721059, 0.00225931163225165, 0.0024049969048389, 0.00252310364086785, 0.00262940166568126, 0.00195164921633517, 0.00190079923515755, 0.00186139563778548, 0.00184188171395076, 0.00183248544676564, 0.00182492970673969, 1.83038731485927e-05, 1.98252671720347e-05, 2.14794764479231e-05, 2.30713122969332e-05, 2.4484220713564e-05, 2.55958833705284e-05, 0.0428066864455102, 0.0431686808647809, 0.0434411033615353, 0.0435883377765726, 0.0436690169266633, 0.0437340464360965, 0.145288252474567, 0.141488776430307, 0.138204532539654, 0.136281799717717, 0.134864952272761, 0.133738386148036, 0.0711728636959696, 0.072031388688795, 0.0727536853228245, 0.0731581966147734, 0.0734424337399303, 0.0736637270702609, 0.000605277151497094, 0.000617268349064968, 0.000632975679951382, 0.000643904422677427, 0.000653775268094148, 0.000662225067910141, 0.26735354610469, 0.267515415990146, 0.26753155165617, 0.267553498616325, 0.267532284594615, 0.267510330320289, 0.000334158771646756, 0.000319032383145857, 0.000306074699839994, 0.000299153278494114, 0.000293956197852583, 0.000290171804454218, 0.000645975219899115, 0.000637548672578787, 0.000632375486965757, 0.000629579821884212, 0.000624956458229123, 0.000622456283217054, 0.0645188290106884, 0.0651539609630352, 0.0656417364889907, 0.0658996698322889, 0.0660715073023965, 0.0662034341510152), .Dim = c(6L, 12L))

    Melted data:

          group variable value
        1     1   PLDKRT     0
        2     1   PLDKRT     0
        3     1   PLDKRT     0
        4     1   PLDKRT     0
        5     1   PLDKRT     0
        6     1   PLDKRT     0

    Code:

        # Data source 1
        sensitivity_1 = rbind(sensitivity_out1, sensitivity_out2)
        sensitivity_1 = data.frame(sensitivity_1)
        colnames(sensitivity_1) = main_l  # variable names
        sensitivity_1$group = 1

        # Data source 2
        sensitivity_2 = rbind(sensitivity_out1[3:4, ], sensitivity_out2[3:4, ])
        sensitivity_2 = data.frame(sensitivity_2)
        colnames(sensitivity_2) = main_l
        sensitivity_2$group = 2

        sensitivity_pool = rbind(sensitivity_1, sensitivity_2)
        sensitivity_pool_m = melt(sensitivity_pool, id.vars = "group")

        ggplot(data = sensitivity_pool_m, aes(x = variable, y = value)) +
          geom_boxplot(aes(fill = group), width = 0.8)

    Error:

        Error in unit(tic_pos.c, "mm") : 'x' and 'units' must have length > 0

    Update: Figured out the error. I should use geom_boxplot(aes(fill = factor(group)), width = 0.8) rather than fill = group.


  • Return nullable DateTime from scalar stored procedure

    - by molgan
    Hello, I have a function that returns a date from a stored procedure, and it all works great until the value is NULL. How can I fix this so it works with NULL as well?

        public DateTime? GetSomeDate(int SomeID)
        {
            DateTime? LimitDate = null;

            if (_entities.Connection.State == System.Data.ConnectionState.Closed)
                _entities.Connection.Open();

            using (EntityCommand c = new EntityCommand("MyEntities.GetSomeDate",
                (EntityConnection)this._entities.Connection))
            {
                c.CommandType = System.Data.CommandType.StoredProcedure;

                EntityParameter paramSomeID = new EntityParameter("SomeID", System.Data.DbType.Int32);
                paramSomeID.Direction = System.Data.ParameterDirection.Input;
                paramSomeID.Value = SomeID;
                c.Parameters.Add(paramSomeID);

                var x = c.ExecuteScalar();
                if (x != null)
                    LimitDate = (DateTime)x;

                return LimitDate.Value;
            }
        }
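    The crash comes from LimitDate.Value: when the procedure returns NULL, LimitDate is still null and .Value throws. Since the method already returns DateTime?, returning the nullable directly (and also guarding against DBNull, which ExecuteScalar can return for a NULL scalar) should cover it; a sketch of the tail of the method:

        var x = c.ExecuteScalar();
        if (x != null && x != DBNull.Value)
            LimitDate = (DateTime)x;

        return LimitDate; // no .Value: let the null flow back to the caller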


  • Setting a URL in the yaml file for Google App Engine (page not found problem)

    - by mswallace
    I am new to Python and I am super excited to learn. I am building my first app on App Engine, and I don't totally understand why my yaml file is not resolving the URL that I set up. Here is the code:

        handlers:
        - url: .*
          script: main.py
        - url: /letmein/.*
          script: letmein.py

    So if I go to http://localhost:8080/letmein/ I get a "link is broken" or "page not found" error. Here is the Python code that I have in letmein.py:

        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import util

        class LetMeInHandler(webapp.RequestHandler):
            def get(self):
                self.response.out.write('letmein!')

        def main():
            application = webapp.WSGIApplication([('/letmein/', LetMeInHandler)],
                                                 debug=True)
            util.run_wsgi_app(application)

        if __name__ == '__main__':
            main()

    Thanks in advance for the help!
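    Handlers in app.yaml are matched top to bottom, so the catch-all .* route swallows /letmein/ before the second handler is ever consulted. Listing the more specific route first should fix the 404:

        handlers:
        - url: /letmein/.*
          script: letmein.py
        - url: .*
          script: main.py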


  • Is marshalling/serialization in PHP as simple as serialize($var)?

    - by Ygam
    Here's a definition of marshalling from Wikipedia:

        In computer science, marshalling (similar to serialization) is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another.

    I have always done data serialization in PHP via its serialize() function, usually on objects or arrays. But how does Wikipedia's definition of marshalling/serialization take place in this serialize() function?
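    Essentially yes: serialize() produces a byte-string representation suitable for storage or transmission, and unserialize() rebuilds the in-memory value, which is exactly the transformation the definition describes. A minimal round trip:

        <?php
        $data = array('id' => 42, 'tags' => array('a', 'b'));

        $blob = serialize($data);    // string safe to store or send
        $copy = unserialize($blob);  // back to the in-memory representation

        var_dump($copy == $data);    // bool(true)

    The usual caveats apply: resources (e.g. database connections) cannot be serialized, and objects need their class definition loaded before unserialize() runs.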


  • Python and Plone help

    - by Grenko
    I'm using the Plone CMS and am having trouble with a Python Script. I get a NameError: "the global name 'open' is not defined". When I put the code in a separate Python script it works fine, and the information is being passed to the script, because I can print the query. The code is below:

        # Import a standard function, and get the HTML request and response objects.
        from Products.PythonScripts.standard import html_quote
        request = container.REQUEST
        RESPONSE = request.RESPONSE

        # Insert data that was passed from the form
        query = request.query
        #print query
        f = open("blast_query.txt", "w")
        for i in query:
            f.write(i)
        return printed

    I also have a second question: can I tell Python to open a file in a certain directory? For example, if the script is in a certain location, i.e. the home folder, but I want the script to open a file at home/some_directory/some_directory, can it be done?
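    The NameError happens because through-the-web Python Scripts run under Zope's RestrictedPython sandbox, which deliberately withholds builtins like open; file I/O normally lives in an External Method on the filesystem instead. A sketch (the module name and output directory are placeholders), which also answers the second question, since the function can build an absolute path to any directory it likes:

        # Extensions/blast.py -- register in the ZMI as an External Method.
        import os

        OUTPUT_DIR = "/home/user/some_directory/some_directory"  # placeholder path

        def write_query(query):
            """Write the form's query data to a file, outside the sandbox."""
            path = os.path.join(OUTPUT_DIR, "blast_query.txt")
            f = open(path, "w")
            try:
                for chunk in query:
                    f.write(chunk)
            finally:
                f.close()
            return "wrote %s" % path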


  • A smarter way to do this jQuery?

    - by Nicky Christensen
    I have a map on my site, and when clicking regions a class should be toggled, on both hover and click. I've made a jQuery solution to do this; however, I think it can be done a bit smarter than I've done it. My HTML output is this:

        <div class="mapdk">
            <a data-class="nordjylland" class="nordjylland" href="#"><span>Nordjylland</span></a>
            <a data-class="midtjylland" class="midtjylland" href="#"><span>Midtjylland</span></a>
            <a data-class="syddanmark" class="syddanmark" href="#"><span>Syddanmark</span></a>
            <a data-class="sjaelland" class="sjalland" href="#"><span>Sjælland</span></a>
            <a data-class="hovedstaden" class="hovedstaden" href="#"><span>Hovedstaden</span></a>
        </div>

    And my jQuery looks like:

        if ($jq(".area .mapdk").length) {
            $jq(".mapdk a.nordjylland").hover(function () {
                $jq(".mapdk").toggleClass("nordjylland");
            }).click(function () {
                $jq(".mapdk").toggleClass("nordjylland");
            });

            $jq(".mapdk a.midtjylland").hover(function () {
                $jq(".mapdk").toggleClass("midtjylland");
            }).click(function () {
                $jq(".mapdk").toggleClass("midtjylland");
            });
        }

    The thing is, with what I've done, I have to write a hover and click function for every link I've got. I was thinking I might keep it in one hover/click function and then use something like $jq(this), but I'm not sure how.
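    Since every anchor already carries its target class in data-class, one generic handler can cover all regions. A sketch (assuming jQuery 1.7+ for .on(); with older versions, .bind() works the same way here):

        if ($jq(".area .mapdk").length) {
            $jq(".mapdk a").on("mouseenter mouseleave click", function () {
                // Toggle the class named by this anchor's data-class attribute.
                $jq(this).closest(".mapdk").toggleClass($jq(this).data("class"));
            });
        }

    .hover() is just shorthand for mouseenter/mouseleave, so binding those two events plus click reproduces the original behaviour for every region in one go.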


  • Java Timer on current instance

    - by hspim
        import java.util.Scanner;
        import java.util.Timer;
        import java.util.TimerTask;

        public class Boggle {
            Board board;
            Player player;
            Timer timer;
            boolean active;
            static Scanner in = new Scanner(System.in);

            public Boggle() {
                board = new Board(4);
                timer = new Timer();
            }

            public void newGame() {
                System.out.println("Please enter your name: ");
                String line = in.nextLine();
                player = new Player(line);
                active = true;
                board.shuffle();
                System.out.println(board);
                timer.schedule(new timesUP(), 20000);
                while (active) {
                    String temp = in.nextLine();
                    player.addGuess(temp);
                }
            }

            public void endGame() {
                active = false;
                int score = Scoring.calculate(player, board);
                System.out.println(score);
            }

            class timesUP extends TimerTask {
                public void run() {
                    endGame();
                }
            }

            public static void main(String[] args) {
                Boggle boggle = new Boggle();
                boggle.newGame();
            }
        }

    I have the above class, which should run a loop for a given length of time and afterwards invoke an instance method. Essentially I need the loop in newGame() to run for a minute or so before endGame() is invoked on the current instance. However, using the Timer class, I'm not sure how I would invoke the method I need on the current instance, since I can't pass any parameters to the TimerTask's run method. Is there an easy way to do this, or am I going about this the wrong way? (Note: this is a console-only project, no GUI.)

    ========== code edited

    I've changed the code to the above following the recommendations, and it works almost as I expect; however, the thread still doesn't seem to end properly. I assumed the while loop would die and control would eventually come back to the main method. Any ideas?
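    The loop hangs because in.nextLine() blocks: endGame() flips active on the timer thread, but the main thread stays parked inside nextLine() until one more line arrives. A minimal patch (a sketch) is to re-check the flag after each read so the final Enter press falls through instead of being scored:

        while (active) {
            String temp = in.nextLine();
            if (!active) {
                break; // time expired while we were blocked in nextLine()
            }
            player.addGuess(temp);
        }

    A fully clean exit, where the game ends without that extra keystroke, needs the read itself to be interruptible, e.g. polling System.in.available() in a loop with a short sleep, or doing the reading on a separate daemon thread.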


  • Adding custom columns to Propel model?

    - by Hard-Boiled Wonderland
    At the moment I am using the query below:

        $claims = ClaimQuery::create('c')
            ->leftJoinUser()
            ->withColumn('CONCAT(User.Firstname, " ", User.Lastname)', 'name')
            ->withColumn('User.Email', 'email')
            ->filterByArray($conditions)
            ->paginate($page = $page, $maxPerPage = $top);

    However, I then want to add columns manually, so I thought this would simply work:

        foreach ($claims as &$claim) {
            $claim->actions = array(
                'edit' => array(
                    'url'  => $this->get('router')->generate('hera_claims_edit'),
                    'text' => 'Edit',
                ),
            );
        }

        return array('claims' => $claims, 'count' => count($claims));

    However, when the data is returned, Propel or Symfony2 seems to strip the custom data when it gets converted to JSON, along with all of the superfluous model data. What is the correct way of manually adding data this way?
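    Propel models don't carry ad-hoc properties into JSON: conversion goes through the generated toArray(), which only knows real and virtual columns, so $claim->actions is silently dropped. The usual workaround is to convert to arrays first, then decorate; a sketch (I believe toArray() includes the withColumn() virtual columns in recent Propel 1.6 releases; if not, merge in $claim->getVirtualColumns()):

        $rows = array();
        foreach ($claims as $claim) {
            $row = $claim->toArray();
            $row['actions'] = array(
                'edit' => array(
                    'url'  => $this->get('router')->generate('hera_claims_edit'),
                    'text' => 'Edit',
                ),
            );
            $rows[] = $row;
        }

        return array('claims' => $rows, 'count' => count($rows));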


  • Perl not closing TCP sockets if clients are no longer connected?

    - by LM
    The purpose of the application is to listen for a specific UDP multicast and then forward the data to any TCP clients connected to the server. The code works fine, but I have a problem with the sockets not closing after a TCP client disconnects. A socket-sniffer utility shows that the sockets remain open and all the UDP data continues to be forwarded to the clients. The problem, I believe, is with the "if ($write->connected())" block, as it always returns true, even if the TCP client is no longer connected. I use standard Windows Telnet to connect to the server and to see the data. When I close Telnet, the TCP socket is supposed to close on the server. Any reason why connected() shows the connections as active even if they are not? Also, what alternative should I use then? Code:

        #!/usr/bin/perl
        use IO::Socket::Multicast;
        use IO::Socket;
        use IO::Select;

        my $tcp_port = "4550";
        my $tcp_socket = IO::Socket::INET->new(
            Listen    => SOMAXCONN,
            LocalAddr => '0.0.0.0',
            LocalPort => $tcp_port,
            Proto     => 'tcp',
            ReuseAddr => 1,
        );

        use Socket qw(IPPROTO_TCP TCP_NODELAY);
        setsockopt($tcp_socket, IPPROTO_TCP, TCP_NODELAY, 1);

        use constant GROUP => '239.2.0.81';
        use constant PORT  => '6550';

        my $udp_socket = IO::Socket::Multicast->new(Proto => 'udp', LocalPort => PORT);
        $udp_socket->mcast_add(GROUP) || die "Couldn't set group: $!\n";

        my $read_select  = IO::Select->new();
        my $write_select = IO::Select->new();
        $read_select->add($tcp_socket);
        $read_select->add($udp_socket);

        ## Loop forever, reading data from the UDP socket and writing it to the
        ## TCP socket(s).
        while (1) {
            ## No timeout specified (see docs for IO::Select). This will block until a TCP
            ## client connects or we have data.
            my @read = $read_select->can_read();
            foreach my $read (@read) {
                if ($read == $tcp_socket) {
                    ## Handle connect from TCP client. Note that UDP connections are
                    ## stateless (no accept necessary)...
                    my $new_tcp = $read->accept();
                    $write_select->add($new_tcp);
                }
                elsif ($read == $udp_socket) {
                    ## Handle data received from UDP socket...
                    my $recv_buffer;
                    $udp_socket->recv($recv_buffer, 1024, undef);

                    ## Write the data read from UDP out to the TCP client(s). Again, no
                    ## timeout. This will block until a TCP socket is writable.
                    my @write = $write_select->can_write();
                    foreach my $write (@write) {
                        ## Make sure the socket is still connected before writing.
                        if ($write->connected()) {
                            $write->send($recv_buffer);
                        }
                        else {
                            $write_select->remove($write);
                            close $write;
                        }
                    }
                }
            }
        }
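    connected() only reports whether the socket was ever connected; it never probes the peer, so it keeps returning true after Telnet exits. The reliable signal is readability: a closed TCP client becomes readable and yields a zero-length read. A sketch (assuming each accepted client is also added to $read_select) of an extra branch inside the can_read loop:

        else {
            ## A TCP client socket: readable plus an empty read means the peer closed.
            my $buf;
            my $n = sysread($read, $buf, 1024);
            if (!defined($n) or $n == 0) {
                $read_select->remove($read);
                $write_select->remove($read);
                close($read);
            }
        }

    Checking the return value of send() (and trapping SIGPIPE) is a useful belt-and-braces addition on the write side.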


  • Which to use, XMP or RDF?

    - by zotty
    What's the difference between RDF and XMP? From what I can tell, XMP is derived from RDF... so what does it offer that RDF doesn't? My particular situation is this: I've got some images which need tagging with details of how an experiment was performed, and what sort of data analysis has been performed on the images. A colleague of mine is pushing for XMP, but he's thinking of the images as photos - they're not really, they're just bits of data. From what I've seen (mainly by opening images in Notepad++), the XMP data looks very similar to RDF - even so far as using RDF in the tag names (e.g. <rdf:Seq>). I'd like this data to be usable by other people who use similar instruments for similar experiments, so creating a mini standard (schema?) seems like the way to go. Apologies for the lack of fundamental understanding - I'm a Doctor, not a programmer! If it makes any difference, the language of choice will be C#. Edit, for more information: First off, thanks for the excellent replies - thinking of XMP as a vocabulary for RDF makes things a lot clearer. The sort of data I'll be storing won't be available in any of the pre-defined sets. It'll detail experimental setups, locations and results. I think using RDF is the way to go.
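    For concreteness: XMP is essentially a constrained RDF/XML vocabulary serialized into an x:xmpmeta packet inside the file, so a custom "mini standard" rides along as just one more namespace. A sketch with a made-up experiment namespace (the URI and element names below are placeholders you would define in your own schema):

        <x:xmpmeta xmlns:x="adobe:ns:meta/">
          <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
            <rdf:Description rdf:about=""
                xmlns:exp="http://example.org/ns/experiment/1.0/">
              <exp:instrument>Confocal scanner</exp:instrument>
              <exp:analysisSteps>
                <rdf:Seq>
                  <rdf:li>background subtraction</rdf:li>
                  <rdf:li>deconvolution</rdf:li>
                </rdf:Seq>
              </exp:analysisSteps>
            </rdf:Description>
          </rdf:RDF>
        </x:xmpmeta>

    The practical advantage of XMP over free-standing RDF is simply that the packet travels inside the image file, and Adobe's XMP SDK (plus libraries in most languages, including C#) knows how to read and write it in place.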


  • How to convert records including 'include' associations to JSON.

    - by 99miles
    If I do something like:

        result = Appointment.find(:all, :include => :staff)
        logger.debug { result.inspect }

    then it only prints out the Appointment data, and not the associated staff data. If I do result[0].staff.inspect, then I get the staff data, of course. The problem is I want to return this to AJAX as JSON, including the staff rows. How do I force it to include the staff rows, or do I have to loop through and create something manually?
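    to_json accepts the same :include option as the finder, which pulls the association into the serialized output; a sketch:

        result = Appointment.find(:all, :include => :staff)
        render :json => result.to_json(:include => :staff)

    The :include in find() only eager-loads the rows; it is the :include passed to to_json that makes them appear in the JSON.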


  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster are suddenly disconnected while I am running MapReduce code on 10 GB of data. After all mappers finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network, even though I killed the MapReduce job as soon as I found a datanode had disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls of all computing nodes, disabled SELinux and increased the open-file limit to 20000, but none of it worked at all. Does anyone have an idea how to solve this problem? The following are error logs from MapReduce:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    and the following are logs from a datanode:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/data/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.secondary.http.address</name>
            <value>0.0.0.0:20090</value>
            <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:20010</value>
            <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:20020</value>
            <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:20475</value>
          </property>
          <property>
            <name>dfs.https.address</name>
            <value>0.0.0.0:20470</value>
          </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
          </property>
          <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
          </property>
          <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
          </property>
          <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description>default number of map tasks per job.</description>
          </property>
          <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
          </property>
          <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description>default number of reduce tasks per job.</description>
          </property>
          <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
          </property>
          <property>
            <name>io.sort.mb</name>
            <value>500</value>
          </property>
          <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value> <!-- 30 minutes -->
          </property>
          <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>mapred.task.tracker.http.address</name>
            <value>0.0.0.0:20060</value>
            <description>50060</description>
          </property>
        </configuration>
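    One knob that may be worth trying (hedged: it addresses the 480000 ms write timeouts visible in the log, not necessarily the root cause) is raising the datanode socket timeouts in hdfs-site.xml. The property names below are the 0.20/0.21-era ones, so double-check them against hdfs-default.xml for your build:

        <property>
          <name>dfs.socket.timeout</name>
          <value>1200000</value> <!-- read timeout, ms -->
        </property>
        <property>
          <name>dfs.datanode.socket.write.timeout</name>
          <value>1200000</value> <!-- write timeout, ms; the 480000 in the log is the default -->
        </property>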


  • Keeping a certain row or column in an HTML table fixed

    - by WarDoGG
    I have huge amounts of data populating an HTML <table> with more than 200 rows and 200 columns. However, when I scroll the page horizontally or vertically to view the data, the header cells (the th elements, for instance) go off the page. How can I scroll through the table and still keep the top row and leftmost column fixed, so that I always know what data I'm seeing?
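    A sketch of one approach: wrap the table in a scrollable container and pin the header row and first column with position: sticky (well supported in current browsers; the classic fallback was a cloned header table synchronized on scroll):

        .table-wrap { max-height: 500px; max-width: 100%; overflow: auto; }
        .table-wrap th { position: sticky; top: 0; background: #fff; }
        .table-wrap th:first-child,
        .table-wrap td:first-child { position: sticky; left: 0; background: #fff; }

    The cell in the top-left corner needs both top and left (and a higher z-index) so it stays fixed in both directions.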


  • Atomic swap in GNU C++

    - by Steve
    I want to verify that my understanding is correct. This kind of thing is tricky, so I'm almost sure I am missing something. I have a program consisting of a real-time thread and a non-real-time thread. I want the non-RT thread to be able to swap a pointer to memory that is used by the RT thread. From the docs, my understanding is that this can be accomplished in g++ with:

        // global
        Data *rt_data;

        Data *swap_data(Data *new_data)
        {
        #ifdef __GNUC__
            // Atomic pointer swap.
            Data *old_d = __sync_lock_test_and_set(&rt_data, new_data);
        #else
            // Non-atomic, cross your fingers.
            Data *old_d = rt_data;
            rt_data = new_data;
        #endif
            return old_d;
        }

    This is the only place in the program (other than initial setup) where rt_data is modified. When rt_data is used in the real-time context, it is copied to a local pointer. As for old_d, later on, when it is certain the old memory is not used, it will be freed in the non-RT thread. Is this correct? Do I need volatile anywhere? Are there other synchronization primitives I should be calling? By the way, I am doing this in C++, although I'm interested in whether the answer differs for C. Thanks ahead of time.
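    Broadly yes, though one detail is worth flagging: GCC documents __sync_lock_test_and_set as an acquire barrier only, not a full barrier, which is a subtle gap for publishing data the other thread will read. On compilers with C++11 support, the same swap is expressed portably, with full sequential consistency by default, as in this sketch:

        #include <atomic>

        std::atomic<Data*> rt_data;

        Data *swap_data(Data *new_data)
        {
            // Atomic exchange; the RT thread should likewise use
            // rt_data.load() when copying the pointer to a local.
            return rt_data.exchange(new_data);
        }

    volatile adds no correctness here; the atomic operation already forces the memory access. In C, the equivalents are the __atomic builtins or C11's <stdatomic.h>.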


  • Avoiding nesting two for loops

    - by chavanak
    Hi, please have a look at the code below:

        import string
        from collections import defaultdict

        first_complex = open("residue_a_chain_a_b_backup.txt", "r")
        first_complex_lines = first_complex.readlines()
        first_complex_lines = map(string.strip, first_complex_lines)
        first_complex.close()

        second_complex = open("residue_a_chain_a_c_backup.txt", "r")
        second_complex_lines = second_complex.readlines()
        second_complex_lines = map(string.strip, second_complex_lines)
        second_complex.close()

        list_1 = []
        list_2 = []
        for x in first_complex_lines:
            if x[0] != "d":
                list_1.append(x)
        for y in second_complex_lines:
            if y[0] != "d":
                list_2.append(y)

        list_3 = []
        for a in list_1:
            for b in list_2:
                if a == b:
                    list_3.append(a)

        kvmap = defaultdict(int)
        for k in list_3:
            kvmap[k] += 1
        print kvmap

    Normally I use izip or izip_longest to club two for loops together, but this time the lengths of the files are different, and I don't want any None entries. If I use the above method, the run time becomes huge and useless. How am I supposed to get the two for loops going? Cheers, Chavanak
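    Since the nested loop is really a membership-and-count test, it can drop to linear time with a set. A sketch; note one semantic difference: the original appends a once per matching element of list_2, so if list_2 can contain duplicates that should be counted, build a count map of list_2 first instead of a set:

        from collections import defaultdict

        in_second = set(list_2)
        kvmap = defaultdict(int)
        for a in list_1:
            if a in in_second:
                kvmap[a] += 1
        print kvmap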


  • WPF legacy server call

    - by Shah Al
    Hi, we have a legacy application running Tomcat that publishes data in a simple HTML table. I have no control over the remote server publishing the data. I am looking to extract the data into a WPF desktop application and display it as a table. Is there any way a WPF application can make a URL call, get the result and parse the data? This would be similar to AJAX from JSP. Any thoughts/ideas? Please advise. Regards,
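    Yes: a WPF app can fetch the page like any .NET client and then parse the <table> out of the returned HTML (a library such as HtmlAgilityPack is the usual choice for the parsing step). A sketch using WebClient's async download so the UI stays responsive; the URL is a placeholder for the Tomcat page:

        using System;
        using System.Net;

        public class TableFetcher
        {
            public void Fetch(Action<string> onHtml)
            {
                var client = new WebClient();
                client.DownloadStringCompleted += (s, e) =>
                {
                    if (e.Error == null)
                        onHtml(e.Result); // parse the <table> rows out of the HTML here
                };
                client.DownloadStringAsync(
                    new Uri("http://legacy-server:8080/data/table.jsp")); // placeholder URL
            }
        }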


  • PHP json_encode

    - by hafizan
    The output below comes from PHP's json_encode(). What we see here is that 0849 appears twice, under the key "0" and under "sn". Since the JavaScript side only uses sn to get the value, why do we need the "0" value at all? The main problem is execution speed: the 800 KB of data could be reduced to 400 KB. If there is no solution, I will have to write a script to filter the result before json_encode() so the data isn't transferred twice.

        {"success":"true","total":968,"data":[{"0":"0849","sn":"0849"}]}
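    The duplicate is almost certainly the fetch mode rather than json_encode() itself: mysql_fetch_array()'s default (like PDO::FETCH_BOTH) returns every column under both its numeric index and its name. Fetching associatively halves the payload; a sketch:

        <?php
        $rows = array();
        while ($row = mysql_fetch_assoc($result)) { // instead of mysql_fetch_array($result)
            $rows[] = $row;                         // each row is now only {"sn": "0849"}
        }
        // or, with PDO: $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

        echo json_encode(array('success' => 'true', 'total' => count($rows), 'data' => $rows));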

