Search Results

Search found 67143 results on 2686 pages for 'complex data types'.


  • Handling Datetime with decimal '2010-02-14 20:18:58.313000000'

    - by AaronLS
    In SQL Server I have some textual data in varchar fields that I am trying to convert to datetime values. The funny thing is that this data was at some point in a datetime field, was exported to a flat file, and now I am reimporting it. The problem is that it is in the format 2010-02-14 20:18:58.313000000 and the conversion to datetime fails. I have no idea how it ended up like this when it was originally extracted from a datetime column. Basically, a table was exported to a flat file by someone else; the original table was lost, and I am reimporting from the flat file. I could just drop the decimal, but that would be like throwing out some of the data. I'd like to maintain as much precision as possible. How can I import this data from the varchar column back into a datetime column and preserve as much accuracy as possible?
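    A minimal sketch of one approach (the staging table and column names are assumptions, not from the question): SQL Server's datetime type only stores about 3.33 ms of precision, so truncating the string to 23 characters (yyyy-mm-dd hh:mm:ss.mmm) discards nothing the original datetime column could actually have held:

        -- Hypothetical names: StagingTable holds the varchar data in RawDate.
        SELECT CAST(LEFT(RawDate, 23) AS datetime) AS ImportedDate
        FROM StagingTable;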


  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data

        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )

        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value from Data where Id = @p0 order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

    - More than one row, even though I asked for TOP 1?
    - No rows, even though one exists and has been committed?
    - Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous.

    Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
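    As a point of comparison, a minimal sketch of the snapshot-isolation alternative mentioned above (the database name is assumed). Under snapshot isolation the SELECT takes no shared locks, so the lock-ordering cycle with the INSERT cannot form:

        -- One-time database setting (assumed database name):
        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- Then, in the reading session:
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        select top 1 Date, Value from Data where Id = @p0 order by Date desc;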


  • php tree structure problem

    - by agazerboy
    Hi All, I have all my data in my database. It has the following four columns:

        id  source_clust  target_clust  result_clust
        1   7             72            649
        2   9             572           650
        3   649           454           651
        4   32            650           435

    This data is like a tree structure: source_clust and target_clust combine to generate result_clust, and a result_clust can later appear as the source_clust or target_clust of a new row (for example, 649 from row 1 is the source_clust of row 3). Is there any PHP function or class that I can use to generate a tree structure for my data? I saw this MySQL site where they are doing exactly what I need, but I couldn't find out how to implement that query on my data. Thanks!
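    A minimal sketch of a plain-PHP builder, not a library (the row shape follows the table above; it assumes rows are ordered so children are produced before they are used). Each result_clust node gets its two inputs as children; inputs never seen as results become leaves:

        <?php
        function buildTree(array $rows) {
            $nodes = array();
            foreach ($rows as $row) {
                $children = array();
                foreach (array($row['source_clust'], $row['target_clust']) as $id) {
                    // Reuse a subtree if an earlier row already produced this id
                    $children[] = isset($nodes[$id]) ? $nodes[$id] : array('id' => $id, 'children' => array());
                    unset($nodes[$id]); // consumed: no longer a candidate root
                }
                $nodes[$row['result_clust']] = array('id' => $row['result_clust'], 'children' => $children);
            }
            return $nodes; // whatever remains unconsumed are the roots of the forest
        }

        $rows = array(
            array('source_clust' => 7,   'target_clust' => 72,  'result_clust' => 649),
            array('source_clust' => 9,   'target_clust' => 572, 'result_clust' => 650),
            array('source_clust' => 649, 'target_clust' => 454, 'result_clust' => 651),
            array('source_clust' => 32,  'target_clust' => 650, 'result_clust' => 435),
        );
        print_r(buildTree($rows)); // two roots here: 651 and 435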


  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ...but I'm completely at a loss, and I don't even know if such a thing is possible.

    Notes:
    - Obviously "no, it isn't possible" is an acceptable answer if true.
    - I'm not overly concerned about portability, so implementation defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me).
    - As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
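    One workaround, sketched below rather than taken from the question: make the defaults array an array of the IntFloat union itself, so the bit pattern never has to be spelled out by hand. Designated initializers are C99, but GCC (including the 3.4 line) also accepts them as an extension; reading the other union member afterwards is implementation-defined, which the question explicitly allows:

        #include <stdio.h>

        typedef union {
            unsigned int intval;
            float floatval;
        } IntFloat;

        /* Each entry names the member it initializes; the float's bit
           pattern is produced by the compiler at build time. */
        static const IntFloat DEFAULTS[] = {
            { .intval   = 0x00000001 }, /* default integer value, 1 */
            { .floatval = 0.005f     }, /* default float value, 0.005 */
        };

        int main(void)
        {
            printf("0x%08X\n", DEFAULTS[1].intval); /* the float's word representation */
            return 0;
        }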


  • asynchronous sockets

    - by user158182
    I need to receive an acknowledgement from the server when it receives data, to confirm that the data has reached the server. Also, how do I use GetSocketOption() in an asynchronous method?
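    A minimal sketch of the acknowledgement half (C#/.NET is assumed from the mention of GetSocketOption(); all names are hypothetical): the server's receive callback sends a short "ACK" back as soon as EndReceive reports that the data has arrived.

        using System;
        using System.Net.Sockets;
        using System.Text;

        class AckingServer
        {
            // Registered earlier via client.BeginReceive(buffer, 0, buffer.Length,
            //     SocketFlags.None, ReceiveCallback, client);
            static void ReceiveCallback(IAsyncResult ar)
            {
                Socket client = (Socket)ar.AsyncState;
                int bytesRead = client.EndReceive(ar); // the data is now on the server
                if (bytesRead > 0)
                {
                    byte[] ack = Encoding.ASCII.GetBytes("ACK");
                    client.BeginSend(ack, 0, ack.Length, SocketFlags.None,
                                     sendAr => client.EndSend(sendAr), null);
                }
            }
        }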


  • Reading from a database located in the Program Files folder using ODBC

    - by Dabblernl
    We have an application that stores its database files in a subfolder of the Program Files directory. These files are redirected to the VirtualStore in Vista and Windows 7. We present data from the database using Microsoft DataReports (VB6). So far so good. But we now want to use Crystal Reports XI to present data from the database. Our idea is to NOT pass this data to CR from our program, but to have CR retrieve it from the database using a system DSN through ODBC. In this way we hope to give our users more flexibility in designing their own reports. What we do want to ensure, though, is that these system DSNs are configured correctly when the user installs our program or when the program calls the Crystal Report. Is there a smart way to do this, using system variables for instance, instead of having to write a routine that checks the OS version, whether UAC is enabled, whether the write restrictions on the Program Files folder have been lifted, etc., and then adapts the system DSN to point to either the C:\Program Files\OurApp\Data folder or the C:\Users\User\AppData\VirtualStore\Program Files\OurApp\Data folder? Suggestions for an entirely different approach are welcome too!


  • WCF Callback Contract Fiddler Debugging

    - by DavyMac23
    I'm trying to debug a WCF self-hosted service utilizing callbacks. Every 1/2 second I make a callback to a SL3 app to display the latest changes (yes, there are tons of changes every 1/2 second). There are 3 services: one has new data every 1/2 second, one has new data every second, and another has new data every hour. I've set all of my timeouts (Send, Receive, Open and Close) to 2 days, 23 hours and 23 minutes, but I still get timeout errors on the service side when I go to issue the callback. I'm using Fiddler and I notice that my service that has new data every hour is still showing up every 10 seconds. Fiddler shows a Body length of -1, then 10 seconds later it changes to 0 and the response is HTTP 200, but the overall elapsed time is 10 seconds. What's going on here?
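    For reference, a sketch of those timeouts expressed as WCF binding configuration (the binding name and type are assumptions, not from the question; a Silverlight 3 duplex client typically communicates via a polling mechanism, so the fixed 10-second cadence in Fiddler may be a poll cycle rather than any of these timeouts expiring):

        <!-- Timespans use d.hh:mm:ss format: 2 days, 23 hours, 23 minutes -->
        <wsDualHttpBinding>
          <binding name="callbackBinding"
                   openTimeout="2.23:23:00"
                   closeTimeout="2.23:23:00"
                   sendTimeout="2.23:23:00"
                   receiveTimeout="2.23:23:00" />
        </wsDualHttpBinding>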


  • ArrayAdapter throwing ArrayIndexOutOfBoundsException

    - by alex
    I am getting an ArrayIndexOutOfBoundsException when I am using my custom array adapter. I am wondering if there are any coding errors I have overlooked. Here is the error log:

        06-10 20:21:53.254: E/AndroidRuntime(315): FATAL EXCEPTION: main
        06-10 20:21:53.254: E/AndroidRuntime(315): java.lang.RuntimeException: Unable to start activity ComponentInfo{alex.android.galaxy.tab.latest/alex.android.galaxy.tab.latest.Basic_db_output}: java.lang.ArrayIndexOutOfBoundsException
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.access$2300(ActivityThread.java:125)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.os.Handler.dispatchMessage(Handler.java:99)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.os.Looper.loop(Looper.java:123)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.main(ActivityThread.java:4627)
        06-10 20:21:53.254: E/AndroidRuntime(315): at java.lang.reflect.Method.invokeNative(Native Method)
        06-10 20:21:53.254: E/AndroidRuntime(315): at java.lang.reflect.Method.invoke(Method.java:521)
        06-10 20:21:53.254: E/AndroidRuntime(315): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868)
        06-10 20:21:53.254: E/AndroidRuntime(315): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626)
        06-10 20:21:53.254: E/AndroidRuntime(315): at dalvik.system.NativeStart.main(Native Method)
        06-10 20:21:53.254: E/AndroidRuntime(315): Caused by: java.lang.ArrayIndexOutOfBoundsException
        06-10 20:21:53.254: E/AndroidRuntime(315): at alex.android.galaxy.tab.latest.Basic_db_output.onCreate(Basic_db_output.java:44)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627)

    I am basing my code on this example: How to use ArrayAdapter<myClass>

        ArrayList<Question> list = new ArrayList<Question>();
        for (int i = 1; i <= 3; i++) {
            Reader reader = new ResultsReader("android_galaxy_tab_latest/src/quiz" + i + ".txt");
            reader.read();
            String str = ((ResultsReader) reader).getInput();
            String data[] = str.split("<.>");
            Question q = new Question();
            q.question = data[0];
            q.answer = Integer.parseInt(data[1]);
            q.choice1 = data[2];
            q.choice2 = data[3];
            q.choice3 = data[4];
            list.add(q);
        }
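    A defensive sketch (the quiz line below is invented for illustration): if the crash is in this loop, it means split() returned fewer pieces than expected, which happens whenever a quiz file is missing, empty, or malformed. Guarding the split makes the real cause visible instead of an opaque ArrayIndexOutOfBoundsException:

        public class SplitGuardDemo {
            public static void main(String[] args) {
                String str = "What is 2+2?<a>4<b>3<c>5<d>6"; // assumed file format
                String[] data = str.split("<.>");
                if (data.length < 5) {
                    throw new IllegalStateException(
                        "Expected 5 fields but got " + data.length + " from: " + str);
                }
                System.out.println("question=" + data[0] + ", answer=" + data[1]);
            }
        }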


  • Double pointer const-correctness warnings in C

    - by Michael Koval
    You can obviously cast a pointer to non-const data to a pointer of the same type to const data:

        int *x = NULL;
        int const *y = x;

    Adding additional const qualifiers to match the additional indirection should logically work the same way:

        int * *x = NULL;
        int *const *y = x;       /* okay */
        int const *const *z = y; /* warning */

    Compiling this with GCC or Clang with the -Wall flag, however, results in the following warning:

        test.c:4:23: warning: initializing 'int const *const *' with an expression of
              type 'int *const *' discards qualifiers in nested pointer types
        int const *const *z = y; /* warning */
                          ^   ~

    Why does adding an additional const qualifier "discard qualifiers in nested pointer types"?
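    For context, a sketch of the classic loophole that C's assignment rule is guarding against (names are hypothetical). C only permits adding qualifiers at the first level of indirection, a rule coarser than strictly necessary: the all-levels-const conversion above is harmless in principle (C++ allows it), but it gets caught by the same blanket restriction that blocks this genuinely unsafe case:

        #include <stddef.h>

        const char immutable = 'x';

        void loophole(void)
        {
            char *p = NULL;
            /* If char ** converted implicitly to const char **, this cast
               would be unnecessary -- and const could be laundered away: */
            const char **pp = (const char **)&p;
            *pp = &immutable; /* fine through pp: const char * to const char * */
            *p  = 'y';        /* but p now points at immutable: undefined behavior */
        }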


  • lua userdata gc

    - by anon
    Is it possible for a piece of Lua userdata to hold a reference to a Lua object (like a table, or another piece of userdata)? Basically, what I want to know is: can I create a piece of userdata in such a way that when the GC runs, the userdata can say: "Hey! I'm holding references to these other objects, mark them as well." Thanks!
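    A minimal sketch using the C API (Lua 5.2's lua_setuservalue is assumed here; Lua 5.1 expresses the same idea through lua_setfenv): attach a table to the userdata, and everything stored in that table stays reachable, and therefore marked by the GC, for as long as the userdata itself lives.

        #include <lua.h>
        #include <lauxlib.h>

        static int new_holder(lua_State *L)
        {
            lua_newuserdata(L, sizeof(int)); /* the userdata itself */
            lua_newtable(L);                 /* table of objects it keeps alive */
            lua_pushvalue(L, 1);             /* e.g. anchor the first argument */
            lua_rawseti(L, -2, 1);           /* t[1] = argument */
            lua_setuservalue(L, -2);         /* associate the table with the userdata */
            return 1;                        /* return the userdata */
        }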


  • javascript exceed timeout

    - by user1866265
    I use jQuery to develop a mobile application. Here is my code below. The problem is that when I add 5 or 6 lines to the page content, all goes well, but if I add many lines, it displays the error message "JavaScript execution exceeded timeout".

        function succes_recu_list_rubrique(tx, results) // after filling SQLite
        {
            console.log('ENTRééééééééééééééé---')
            $('#lbtn').prepend("<legend>Selectionner un Rubrique</legend><br>");
            for (var i = 0; i < results.rows.length; i++) // fill the list of step identifiers
            {
                $('#lbtn').append("<input name='opt1' checked type='radio' value=" + results.rows.item(i).IdRubrique + " id=" + results.rows.item(i).IdRubrique + " />");
                $('#lbtn').append('<label for=' + results.rows.item(i).IdRubrique + '>' + results.rows.item(i).LibelleRubrique + '</label>');
            }
            $('#lbtn').append('<a href="#page_dialog2" class="offer2" data-rel="dialog" data-role="button" >Consulter</a>').trigger('create');
            $('#lbtn').append('<a href="#' + id_grp_rub + '" data-role="button" data-rel="back" data-theme="c" >Cancel</a> ').trigger('create');
        }
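    A sketch of one common fix (function and element names taken from the question): the repeated append calls, each triggering DOM work, are what burn the time budget, so build the markup in an array and touch the DOM once.

        function succes_recu_list_rubrique(tx, results) {
            var html = ["<legend>Selectionner un Rubrique</legend><br>"];
            for (var i = 0; i < results.rows.length; i++) {
                var item = results.rows.item(i);
                html.push("<input name='opt1' checked type='radio' value='" + item.IdRubrique + "' id='" + item.IdRubrique + "' />");
                html.push("<label for='" + item.IdRubrique + "'>" + item.LibelleRubrique + "</label>");
            }
            // One DOM insertion and one jQuery Mobile enhancement pass:
            $('#lbtn').append(html.join('')).trigger('create');
        }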


  • Django: How do I position a page when using Django templates

    - by swisstony
    I have a web page where the user enters some data and then clicks a submit button. I process the data and then use the same Django template to display the original data, the submit button, and the results. When I am using the Django template to display results, I would like the page to be automatically scrolled down to the part of the page where the results begin. This allows the user to scroll back up the page if she wants to change her original data and click submit again. Hopefully, there's some simple way of doing this that I can't see at the moment.
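    One simple approach, sketched below (the id and template structure are assumptions): post the form to the same URL with a fragment identifier, and give the results block a matching id. After the round trip the browser scrolls to the fragment on its own, with no JavaScript needed.

        <form method="post" action="#results">
            {{ form }}
            <input type="submit" value="Submit">
        </form>

        {% if results %}
        <div id="results">
            {{ results }}
        </div>
        {% endif %}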


  • modifying a cloned element and reinserting in dom - jquery

    - by Praveen Prasad
    // DOM
    <div id='toBeCloned'><span>Some name</span></div>
    <div id='targetElm'></div>

    // JS
    $(function () {
        // creating a clone
        var _clone = $('#toBeCloned').clone(true);
        // target element
        var _target = $('#targetElm');
        // now the target element is to be filled with cloned elements, with the data of the span changed
        var _someData = [1, 2, 3, 4];
        // loop through data
        $.each(_someData, function (i, data) {
            var _newElm = {};
            $.extend(_newElm, _clone); // copy clone to new elm
            _newElm.find('span').html(data); // edit content of span
            alert('p'); // alert added to show that the append on the next line, in spite of adding a new element to the DOM, is just updating the previous one
            _target.append(_newElm); // update target
        });
    });

    Expected result: 1 2 3 4. The result I am getting is: 4.
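    A sketch of the usual fix: $.extend copies property references of the same jQuery object rather than cloning the DOM node, so every iteration edits and re-appends the one element. Cloning a fresh copy inside the loop yields four independent elements:

        $.each(_someData, function (i, data) {
            var _newElm = $('#toBeCloned').clone(true); // a fresh DOM copy each pass
            _newElm.removeAttr('id');                   // avoid duplicate ids in the document
            _newElm.find('span').html(data);
            _target.append(_newElm);
        });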


  • jquery change load get url

    - by john morris
    OK, so I have a PHP page that dynamically outputs data depending on a $_GET variable ['file']. I have another page (index.php) which has a jQuery script that uses the load() function to load the PHP page. I have a list of links, and when you click on one, it needs to change the $_GET variable to load and then refresh the load() call. Here's a snippet:

        $("#remote-files").load("data.php?file=wat.txt");

        $(".link1").mousedown(function() {
            $("#remote-files").load("data.php?file=link1.txt");
        });

    As you can see, it loads the data into a div with the ID of remote-files. Is there a better way to do this, like updating the page with the new GET variable instead of redefining a new load function?
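    A sketch of one tidier pattern (the data-file attribute is an assumption about the markup, and jQuery 1.7+'s .on is assumed): one delegated handler reads the file name from the clicked link, so adding new links needs no extra JavaScript.

        $("#remote-files").load("data.php?file=wat.txt"); // initial load

        $(document).on("mousedown", "a[data-file]", function () {
            $("#remote-files").load("data.php?file=" + encodeURIComponent($(this).data("file")));
        });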


  • Calling Web Services Asynchronously in Page_Load Event

    - by Umar Siddique
    I'm working on a web application using VB.NET. In the page load event I am calling a remote web service which takes time to bring back the data. During this process none of the other content on the page is rendered. I want to call this remote web service asynchronously, so that the other data on the page is displayed first and the web service data is displayed when it becomes available.
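    A sketch of ASP.NET's asynchronous page pattern in VB.NET (the service proxy and control names are assumptions). It requires Async="true" in the @ Page directive; the request thread is released while the web service call runs. Note that the response is still sent only once the task completes, so if the page must appear before the data, an AJAX call after render is the usual alternative:

        ' Page directive: <%@ Page Async="true" ... %>
        Private _svc As New RemoteService() ' assumed web service proxy

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
            RegisterAsyncTask(New PageAsyncTask( _
                AddressOf BeginCall, AddressOf EndCall, Nothing, Nothing))
        End Sub

        Private Function BeginCall(ByVal sender As Object, ByVal e As EventArgs, _
                ByVal cb As AsyncCallback, ByVal state As Object) As IAsyncResult
            Return _svc.BeginGetData(cb, state) ' assumed Begin/End pair on the proxy
        End Function

        Private Sub EndCall(ByVal ar As IAsyncResult)
            ResultsLabel.Text = _svc.EndGetData(ar) ' assumed label control
        End Sub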


  • Using java.util.logging, is it possible to restart logs after a certain period of time?

    - by Fry
    I have some Java code that will be running as an importer of data for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems a little inadequate now given the amount of data passing through the importer. Often the importer will get data that the main system doesn't have information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that this log tends to grow in size very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody know if this can be done with the logging classes, or would I have to switch to log4j or something custom? Thanks for any help!
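    A sketch of one way to stay on java.util.logging (class, logger, and file names are assumptions): FileHandler only rotates by size, so a small timer can swap in a new date-stamped handler once a day.

        import java.io.IOException;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.Timer;
        import java.util.TimerTask;
        import java.util.logging.FileHandler;
        import java.util.logging.Logger;
        import java.util.logging.SimpleFormatter;

        public class DailyLogRoller {
            private static final Logger LOG = Logger.getLogger("importer");
            private static FileHandler current;

            // Swap the logger's FileHandler for a fresh, date-stamped one.
            static synchronized void roll() throws IOException {
                String name = "importer-" + new SimpleDateFormat("yyyy-MM-dd").format(new Date()) + ".log";
                FileHandler next = new FileHandler(name, true); // append if restarted the same day
                next.setFormatter(new SimpleFormatter());
                if (current != null) {
                    LOG.removeHandler(current);
                    current.close();
                }
                LOG.addHandler(next);
                current = next;
            }

            public static void main(String[] args) throws IOException {
                roll(); // open today's log
                long day = 24L * 60 * 60 * 1000;
                new Timer("log-roller", true).scheduleAtFixedRate(new TimerTask() {
                    @Override public void run() {
                        try { roll(); } catch (IOException e) { e.printStackTrace(); }
                    }
                }, day, day);
                LOG.info("importer started");
            }
        }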


  • Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be done?

    - by Mikey John
    Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be done? Here's my stored procedure:

        set ANSI_NULLS ON
        set QUOTED_IDENTIFIER ON
        GO

        ALTER PROCEDURE [dbo].[AddPerson]
            @Data AS XML
        AS
        INSERT INTO Persons (name, image_binary)
        SELECT rowVals.value('./@Name', 'varchar(64)') AS [Name],
               rowVals.value('./@ImageBinary', 'varbinary(MAX)') AS [ImageBinary]
        FROM @Data.nodes ('/Data/Names') as b(rowVals)

        SELECT SCOPE_IDENTITY() AS Id

    In my schema, Name is of type String and ImageBinary is of type byte[].
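    A sketch of the usual approach (it assumes the client writes the byte[] into the XML attribute as base64, which is what .NET's XML serialization produces by default): let XQuery decode the attribute through xs:base64Binary rather than reading it as varbinary directly.

        INSERT INTO Persons (name, image_binary)
        SELECT rowVals.value('./@Name', 'varchar(64)'),
               rowVals.value('xs:base64Binary(./@ImageBinary)', 'varbinary(MAX)')
        FROM @Data.nodes('/Data/Names') AS b(rowVals)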


  • How to convert records including 'include' associations to JSON.

    - by 99miles
    If I do something like:

        result = Appointment.find( :all, :include => :staff )
        logger.debug { result.inspect }

    then it only prints out the Appointment data, and not the associated staff data. If I do result[0].staff.inspect then I get the staff data, of course. The problem is I want to return this to AJAX as JSON, including the staff rows. How do I force it to include the staff rows, or do I have to loop through and create something manually?
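    A sketch of the direct route (Rails of that era): to_json accepts the same :include option as find, so the staff records are serialized inline with each appointment.

        result = Appointment.find(:all, :include => :staff)
        render :json => result.to_json(:include => :staff)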


  • Is marshaling/ serialization in PHP as simple as serialize($var) ?

    - by Ygam
    Here's a definition of marshalling from Wikipedia: "In computer science, marshalling (similar to serialization) is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another." I have always done data serialization in PHP via its serialize() function, usually on objects or arrays. But how does Wikipedia's definition of marshalling/serialization take place in this serialize() function?
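    A minimal sketch of the correspondence: serialize() turns an in-memory value into a storable string, which is exactly the marshalling step the definition describes, and unserialize() reverses it (the file path below is illustrative).

        <?php
        $user = array('id' => 7, 'roles' => array('admin', 'editor'));

        $wire = serialize($user);                    // e.g. a:2:{s:2:"id";i:7;...}
        file_put_contents('/tmp/user.dat', $wire);   // storage or transmission

        $restored = unserialize(file_get_contents('/tmp/user.dat'));
        var_dump($restored == $user);                // bool(true)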


  • Group variables in a boxplot in R

    - by tao.hong
    I am trying to generate a boxplot whose data come from two scenarios. In the plot, I would like to group boxes by their names (so there will be two boxes per variable). I know ggplot would be a good choice, but I got errors which I could not figure out. Can anyone give me some suggestions?

        sensitivity_out1 <- structure(c(0.0522902104339716, 0.0521369824334004, 0.0520240345973737, 0.0519818337359876, 0.051935071418996, 0.0519089404325544,
            0.000392698277338341, 0.000326135474295325, 0.000280863338343747, 0.000259631566041935, 0.000246594043996332, 0.000237923540393391,
            0.00046732650331544, 0.000474448907808135, 0.000478287273678457, 0.000480194683464109, 0.000480631753078668, 0.000481760272726273,
            0.000947965771207979, 0.000944821699830455, 0.000939631071343889, 0.000937186900570605, 0.000936007346568281, 0.000934756220144141,
            0.00132442589501872, 0.00132658367774979, 0.00133334696220742, 0.00133622384928092, 0.0013381577476241, 0.00134005741746304,
            0.0991622968751298, 0.100791399440082, 0.101946808417405, 0.102524244727408, 0.102920085260477, 0.103232984259916,
            0.0305219507186844, 0.0304635269233494, 0.0304161055015213, 0.0303742106794513, 0.0303381888169022, 0.0302996157711171,
            1.94268588634518e-05, 2.23991225564447e-05, 2.5756135487907e-05, 2.79997917298194e-05, 3.00753967077715e-05, 3.16270817369878e-05,
            0.544701146678523, 0.542887331601984, 0.541632986366816, 0.541005610554556, 0.540617004208336, 0.540315690692195,
            0.000453386694666078, 0.000448473414508756, 0.00044692043197248, 0.000444826296854332, 0.000445747996014684, 0.000444764303682453,
            0.000127569551159321, 0.000128422491392669, 0.00012933662856487, 0.000129941842982939, 0.000129578971489026, 0.000131113075233758,
            0.00684610571790029, 0.00686349387897349, 0.00687468164010565, 0.00687880720347743, 0.00688275579317197, 0.00687822247621936), .Dim = c(6L, 12L))

        sensitivity_out2 <- structure(c(0.0189965816735366, 0.0189995096225103, 0.0190099362589894, 0.0190033523148514, 0.01900896721937, 0.0190099427513381,
            0.00192043989797585, 0.00207303208721059, 0.00225931163225165, 0.0024049969048389, 0.00252310364086785, 0.00262940166568126,
            0.00195164921633517, 0.00190079923515755, 0.00186139563778548, 0.00184188171395076, 0.00183248544676564, 0.00182492970673969,
            1.83038731485927e-05, 1.98252671720347e-05, 2.14794764479231e-05, 2.30713122969332e-05, 2.4484220713564e-05, 2.55958833705284e-05,
            0.0428066864455102, 0.0431686808647809, 0.0434411033615353, 0.0435883377765726, 0.0436690169266633, 0.0437340464360965,
            0.145288252474567, 0.141488776430307, 0.138204532539654, 0.136281799717717, 0.134864952272761, 0.133738386148036,
            0.0711728636959696, 0.072031388688795, 0.0727536853228245, 0.0731581966147734, 0.0734424337399303, 0.0736637270702609,
            0.000605277151497094, 0.000617268349064968, 0.000632975679951382, 0.000643904422677427, 0.000653775268094148, 0.000662225067910141,
            0.26735354610469, 0.267515415990146, 0.26753155165617, 0.267553498616325, 0.267532284594615, 0.267510330320289,
            0.000334158771646756, 0.000319032383145857, 0.000306074699839994, 0.000299153278494114, 0.000293956197852583, 0.000290171804454218,
            0.000645975219899115, 0.000637548672578787, 0.000632375486965757, 0.000629579821884212, 0.000624956458229123, 0.000622456283217054,
            0.0645188290106884, 0.0651539609630352, 0.0656417364889907, 0.0658996698322889, 0.0660715073023965, 0.0662034341510152), .Dim = c(6L, 12L))

    Melted data (head):

          group variable value
        1     1   PLDKRT     0
        2     1   PLDKRT     0
        3     1   PLDKRT     0
        4     1   PLDKRT     0
        5     1   PLDKRT     0
        6     1   PLDKRT     0

    Code:

        # Data source 1
        sensitivity_1 = rbind(sensitivity_out1, sensitivity_out2)
        sensitivity_1 = data.frame(sensitivity_1)
        colnames(sensitivity_1) = main_l # variable names
        sensitivity_1$group = 1

        # Data source 2
        sensitivity_2 = rbind(sensitivity_out1[3:4,], sensitivity_out2[3:4,])
        sensitivity_2 = data.frame(sensitivity_2)
        colnames(sensitivity_2) = main_l
        sensitivity_2$group = 2

        sensitivity_pool = rbind(sensitivity_1, sensitivity_2)
        sensitivity_pool_m = melt(sensitivity_pool, id.vars = "group")

        ggplot(data = sensitivity_pool_m, aes(x = variable, y = value)) +
            geom_boxplot(aes(fill = group), width = 0.8)

    Error: "Error in unit(tic_pos.c, "mm") : 'x' and 'units' must have length > 0"

    Update: Figured out the error. I should use geom_boxplot(aes(fill = factor(group)), width = 0.8) rather than fill = group.
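    For completeness, a sketch of the corrected call from the update: ggplot2 needs a discrete variable for fill, so converting group to a factor (either up front, as here, or inside aes()) is what makes the grouped boxes appear.

        sensitivity_pool_m$group <- factor(sensitivity_pool_m$group)
        ggplot(sensitivity_pool_m, aes(x = variable, y = value, fill = group)) +
            geom_boxplot(width = 0.8)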


  • A smarter way to do this jQuery?

    - by Nicky Christensen
    I have a map on my site, and when clicking regions a class should be toggled, on both hover and click. I've made a jQuery solution to do this; however, I think it can be done a bit smarter than I've done it. My HTML output is this:

        <div class="mapdk">
            <a data-class="nordjylland" class="nordjylland" href="#"><span>Nordjylland</span></a>
            <a data-class="midtjylland" class="midtjylland" href="#"><span>Midtjylland</span></a>
            <a data-class="syddanmark" class="syddanmark" href="#"><span>Syddanmark</span></a>
            <a data-class="sjaelland" class="sjalland" href="#"><span>Sjælland</span></a>
            <a data-class="hovedstaden" class="hovedstaden" href="#"><span>Hovedstaden</span></a>
        </div>

    And my jQuery looks like:

        if ($jq(".area .mapdk").length) {
            $jq(".mapdk a.nordjylland").hover(function () {
                $jq(".mapdk").toggleClass("nordjylland");
            }).click(function () {
                $jq(".mapdk").toggleClass("nordjylland");
            });
            $jq(".mapdk a.midtjylland").hover(function () {
                $jq(".mapdk").toggleClass("midtjylland");
            }).click(function () {
                $jq(".mapdk").toggleClass("midtjylland");
            });
        }

    The thing is, with what I've done, I have to write a hover and a click function for every link I've got. I was thinking I might keep it in one hover/click function and then use something like $jq(this)? But I'm not sure how.
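    A sketch of the consolidated version, driven by the data-class attribute already present in the markup: one binding covers every region, and $jq(this) supplies the class name.

        if ($jq(".area .mapdk").length) {
            $jq(".mapdk a").hover(function () {
                $jq(".mapdk").toggleClass($jq(this).data("class"));
            }).click(function () {
                $jq(".mapdk").toggleClass($jq(this).data("class"));
            });
        }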


  • HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster suddenly disconnected while I was running MapReduce code on 10 GB of data. After all mappers had finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network, even though I killed the MapReduce job when I found some datanodes had disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls of all computing nodes, disabled SELinux, and increased the open-file limit to 20000, but none of it worked at all... Does anyone have an idea how to solve this problem? The following is the error log from MapReduce:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    And the following are logs from a datanode:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/data/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.secondary.http.address</name>
            <value>0.0.0.0:20090</value>
            <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:20010</value>
            <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:20020</value>
            <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:20475</value>
          </property>
          <property>
            <name>dfs.https.address</name>
            <value>0.0.0.0:20470</value>
          </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
          </property>
          <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
          </property>
          <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
          </property>
          <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description>default number of map tasks per job.</description>
          </property>
          <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
          </property>
          <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description>default number of reduce tasks per job.</description>
          </property>
          <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
          </property>
          <property>
            <name>io.sort.mb</name>
            <value>500</value>
          </property>
          <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value> <!-- 30 minutes -->
          </property>
          <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>mapred.task.tracker.http.address</name>
            <value>0.0.0.0:20060</value>
            <description>50060</description>
          </property>
        </configuration>
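    One mitigation that comes up for exactly this "480000 millis timeout while waiting for channel to be ready for write" symptom, sketched as an hdfs-site.xml addition (the values are illustrative, not a confirmed fix): lengthen the datanode socket timeouts so slow transfer peers are not declared dead so quickly.

        <property>
          <name>dfs.datanode.socket.write.timeout</name>
          <value>3000000</value>
        </property>
        <property>
          <name>dfs.socket.timeout</name>
          <value>3000000</value>
        </property>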

