Search Results

Search found 34207 results on 1369 pages for 'query output'.


  • How can I pipe two Perl CORE::system commands in a cross-platform way?

    - by Pedro Silva
    I'm writing a System::Wrapper module to abstract away from CORE::system and the qx operator. I have a serial method that attempts to connect command1's output to command2's input. I've made some progress using named pipes, but POSIX::mkfifo is not cross-platform. Here's part of what I have so far (the run method at the bottom basically calls system): package main; my $obj1 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{''}], input => ['input.txt'], description => 'Concatenate input.txt to STDOUT', ); my $obj2 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{'$_ = reverse $_}'}], description => 'Reverse lines of input input', output => { '>' => 'output' }, ); $obj1->serial( $obj2 ); package System::Wrapper; #... sub serial { my ($self, @commands) = @_; eval { require POSIX; POSIX->import(); require threads; }; my $tmp_dir = File::Spec->tmpdir(); my $last = $self; my @threads; push @commands, $self; for my $command (@commands) { croak sprintf "%s::serial: type of args to serial must be '%s', not '%s'", ref $self, ref $self, ref $command || $command unless ref $command eq ref $self; my $named_pipe = File::Spec->catfile( $tmp_dir, int \$command ); POSIX::mkfifo( $named_pipe, 0777 ) or croak sprintf "%s::serial: couldn't create named pipe %s: %s", ref $self, $named_pipe, $!; $last->output( { '>' => $named_pipe } ); $command->input( $named_pipe ); push @threads, threads->new( sub{ $last->run } ); $last = $command; } $_->join for @threads; } #... My specific questions: Is there an alternative to POSIX::mkfifo that is cross-platform? Win32 named pipes don't work, as you can't open those as regular files, neither do sockets, for the same reasons. 2. The above doesn't quite work; the two threads get spawned correctly, but nothing flows across the pipe. I suppose that might have something to do with pipe deadlocking or output buffering. What throws me off is that when I run those two commands in the actual shell, everything works as expected. Point 2 is solved; a -p fifo file test was not testing the correct file.
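
    For illustration only, the same wiring can be done with an anonymous pipe that the operating system hands to both child processes, which avoids named FIFOs entirely and is portable. The sketch below is Python rather than Perl (the wrapped commands are placeholders taken from the example above), but the shape of the approach carries over to any language that can pass one child's stdout handle as another child's stdin:

        import subprocess

        # Minimal sketch: command1's stdout feeds command2's stdin through an
        # anonymous pipe instead of a POSIX named FIFO, so it also works on Windows.
        with open("input.txt", "rb") as src, open("output", "wb") as dst:
            p1 = subprocess.Popen(["perl", "-pe", ""], stdin=src,
                                  stdout=subprocess.PIPE)
            p2 = subprocess.Popen(["perl", "-pe", "$_ = reverse $_"],
                                  stdin=p1.stdout, stdout=dst)
            p1.stdout.close()  # p2 now owns the read end of the pipe
            p2.wait()
            p1.wait()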

    Read the article

  • Need help to properly remove duplicates in NHibernate

    - by Michael D. Kirkpatrick
    Here is the problem I am having. I have a database with over 100 records in it. I am paging through the data to get 9 results at a time. When I added a check to see if items are active, it caused the results to start doubling up. A little background: "Product" is the actual product line "ProductSkus" are the actual products that exist in the product line When there is more then 1 ProductSku within Product, it causes a duplicate entry to be returned. See the NHibernate Query below: result = this.Session.CreateCriteria<Model.Product>() .Add(Expression.Eq("IsActive", true)) .AddOrder(new Order("Name", true)) .SetFirstResult(indexNumber).SetMaxResults(maxNumber) // This part of the query duplicates the products .CreateAlias("ProductSkus", "ProdSkus", JoinType.InnerJoin) .Add(Expression.Eq("ProdSkus.IsActive", true)) .CreateAlias("ProductToSubcategory", "ProdToSubcat") .CreateAlias("ProdToSubcat.ProductSubcategory", "ProdSubcat") .Add(Expression.Eq("ProdSubcat.ID", subCatId)) // This part takes out the duplicate products - Removes too many items... // Turns out that with .SetFirstResult(indexNumber).SetMaxResults(maxNumber) // it gets 9 records back then the duplicates are removed. // Example: // Total Records over 100 // Max = 9 // 4 Duplicates removed // Yields 5 records when there should be 9 // Why??? This line is ran in NHibernate on the data after it has been extracted from the SQL server. .SetResultTransformer(new NHibernate.Transform.DistinctRootEntityResultTransformer()) .List<Model.Product>(); I added the DistinctRootEntityResultTransformer to clean up the duplicates. The problem is that it pulls 9 records back that contains duplicates. DistinctRootEntityResultTransformer then cleans up the duplicates in the 9 records. I am basically needing a distinct statement to be ran on the SQL server to begin with. However, distinct on SQL is not going to work since NHibernate by default wants to add every field from every table in the select part of the statement. I am only using the fields that belong to the root table to begin with (Model.Product). If I can tell NHibernate to not add the fields to the joined tables into the select part of the statement along with adding Distinct, it would work. 
I use NHibernare Profiler to see the actual query: SELECT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_, prodskus1_.ID as ID373_0_, prodskus1_.Description as Descript2_373_0_, prodskus1_.PartNumber as PartNumber373_0_, prodskus1_.Price as Price373_0_, prodskus1_.IsKit as IsKit373_0_, prodskus1_.IsActive as IsActive373_0_, prodskus1_.IsFeaturedProduct as IsFeatur7_373_0_, prodskus1_.DateAdded as DateAdded373_0_, prodskus1_.Weight as Weight373_0_, prodskus1_.TimesViewed as TimesVi10_373_0_, prodskus1_.TimesOrdered as TimesOr11_373_0_, prodskus1_.ProductID as ProductID373_0_, prodskus1_.OverSizedBoxID as OverSiz13_373_0_, prodtosubc2_.ID as ID362_1_, prodtosubc2_.MasterSubcategory as MasterSu2_362_1_, prodtosubc2_.ProductID as ProductID362_1_, prodtosubc2_.ProductSubcategoryID as ProductS4_362_1_, prodsubcat3_.ID as ID352_2_, prodsubcat3_.Name as Name352_2_, prodsubcat3_.ProductCategoryID as ProductC3_352_2_, prodsubcat3_.ImageID as ImageID352_2_, prodsubcat3_.TriggerShow as TriggerS5_352_2_ FROM Product this_ inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1) inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */ ORDER BY this_.Name asc If I hand modify the query and run it directly on the SQL server I get the result set I want (I removed all the extra fields in the select section and added DISTINCT): SELECT DISTINCT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_, FROM Product this_ inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1) inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */ ORDER BY this_.Name asc The big question I now must ask is... What must I change in the NHibernate Query to ultimately get the exact same result? Thanks in advance.

    Read the article

  • Normalizing a table

    - by Alex
    I have a legacy table, which I can't change. The values in it can be modified from legacy application (application also can't be changed). Due to a lot of access to the table from new application (new requirement), I'd like to create a temporary table, which would hopefully speed up the queries. The actual requirement, is to calculate number of business days from X to Y. For example, give me all business days from Jan 1'st 2001 until Dec 24'th 2004. The table is used to mark which days are off, as different companies may have different days off - it isn't just Saturday + Sunday) The temporary table would be created from a .NET program, each time user enters the screen for this query (user may run query multiple times, with different values, table is created once), so I'd like it to be as fast as possible. Approach below runs in under a second, but I only tested it with a small dataset, and still it takes probably close to half a second, which isn't great for UI - even though it's just the overhead for first query. The legacy table looks like this: CREATE TABLE [business_days]( [country_code] [char](3) , [state_code] [varchar](4) , [calendar_year] [int] , [calendar_month] [varchar](31) , [calendar_month2] [varchar](31) , [calendar_month3] [varchar](31) , [calendar_month4] [varchar](31) , [calendar_month5] [varchar](31) , [calendar_month6] [varchar](31) , [calendar_month7] [varchar](31) , [calendar_month8] [varchar](31) , [calendar_month9] [varchar](31) , [calendar_month10] [varchar](31) , [calendar_month11] [varchar](31) , [calendar_month12] [varchar](31) , misc. ) Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with X. Each half day is marked with an 'H'. For example, if a month starts on a Thursday, than it will look like (Thursday+Friday workdays, Saturday+Sunday marked with X): ' XX XX ..' I'd like the new table to look like so: create table #Temp (country varchar(3), state varchar(4), date datetime, hours int) And I'd like to only have rows for days which are off (marked with X or H from previous query) What I ended up doing, so far is this: Create a temporary-intermediate table, that looks like this: create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int) To populate it, I have a union which basically unions calendar_month, calendar_month2, calendar_month3, etc. Than I have a loop which loops through all the rows in #Temp_2, after each row is processed, it is removed from #Temp_2. To process the row there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into #Temp table. 
[edit added code] Declare @country_code char(3) Declare @state_code varchar(4) Declare @calendar_year int Declare @calendar_month varchar(31) Declare @month_code int Declare @calendar_date datetime Declare @day_code int WHILE EXISTS(SELECT * From #Temp_2) -- where processed = 0) BEGIN Select Top 1 @country_code = t2.country_code, @state_code = t2.state_code, @calendar_year = t2.calendar_year, @calendar_month = t2.calendar_month, @month_code = t2.month_code From #Temp_2 t2 -- where processed = 0 set @day_code = 1 while @day_code <= 31 begin if substring(@calendar_month, @day_code, 1) = 'X' begin set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar))) insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 8) end if substring(@calendar_month, @day_code, 1) = 'H' begin set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar))) insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 4) end set @day_code = @day_code + 1 end delete from #Temp_2 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code --update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code END I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better approach suggestion. After having the temp table, I'm planning to do (dates would be coming from a table): select cast(convert(datetime, ('01/31/2012'), 101) -convert(datetime, ('01/17/2012'), 101) as int) - ((select sum(hours) from #Temp where date between convert(datetime, ('01/17/2012'), 101) and convert(datetime, ('01/31/2012'), 101)) / 8) Besides the solution of normalizing the table, the other solution I implemented for now, is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function, if I can instead add a simpler query to get result. (I'm currently trying this on MSSQL, but I would need to do same for Sybase ASE and Oracle)
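
    As a cross-check of the expansion logic (independent of the T-SQL above), here is a minimal Python sketch that turns one legacy row's twelve month strings into the (country, state, date, hours) rows the temp table needs; the function name and sample values are hypothetical:

        from datetime import date

        def off_day_rows(country, state, year, month_strings):
            # 'X' marks a full day off (8 hours), 'H' a half day (4 hours);
            # positions past the real end of a month are simply skipped.
            rows = []
            for month_no, marks in enumerate(month_strings, start=1):
                for day_no, mark in enumerate(marks, start=1):
                    if mark in ("X", "H"):
                        try:
                            d = date(year, month_no, day_no)
                        except ValueError:
                            continue
                        rows.append((country, state, d, 8 if mark == "X" else 4))
            return rows

        # Hypothetical month string: days 3 and 4 marked as full days off
        print(off_day_rows("USA", "NY", 2012, ["  XX" + " " * 27] + [""] * 11))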

    Read the article

  • [Qt/C++] Need help optimizing drawing code

    - by Ahmad
    Hello all ... I needed some help in trying to optimize this code portion ... Basically here's the thing .. I'm making this 'calligraphy pen' which gives the calligraphy effect by simply drawing a lot of adjacent slanted lines ... The problem is this: When I update the draw region using update() after every single draw of a slanted line, the output is correct, in the sense that updates are done in a timely manner, so that everything 'drawn' using the pen is immediately 'seen' the drawing.. however, because a lot (100s of them) of updates are done, the program slows down a little when run on the N900 ... When I try to do a little optimization by running update after drawing all the slanted lines (so that all lines are updated onto the drawing board through a single update() ), the output is ... odd .... That is, immediately after drawing the lines, they lines seem broken (they have vacant patches where the drawing should have happened as well) ... however, if I trigger a redrawing of the form window (say, by changing the size of the form), the broken patches are immediately fixed !! When I run this program on my N900, it gets the initial broken output and stays like that, since I don't know how to enforce a redraw in this case ... Here is the first 'optimized' code and output (partially correct/incorrect) void Canvas::drawLineTo(const QPoint &endPoint) { QPainter painter(&image); painter.setPen(QPen(Qt::black,1,Qt::SolidLine,Qt::RoundCap,Qt::RoundJoin)); int fx=0,fy=0,k=0; qPoints.clear(); connectingPointsCalculator2(qPoints,lastPoint.x(),lastPoint.y(),endPoint.x(),endPoint.y()); int i=0; int x,y; for(i=0;i<qPoints.size();i++) { x=qPoints.at(i).x(); y=qPoints.at(i).y(); painter.setPen(Qt::black); painter.drawLine(x-5,y-5,x+5,y+5); **// Drawing slanted lines** } **//Updating only once after many draws:** update (QRect(QPoint(lastPoint.x()-5,lastPoint.y()-5), QPoint(endPoint.x()+5,endPoint.y()+5)).normalized()); modified = true; lastPoint = endPoint; } Image right after scribbling on screen: http://img823.imageshack.us/img823/8755/59943912.png After re-adjusting the window size, all the broken links above are fixed like they should be .. Here is the second un-optimized code (its output is correct right after drawing, just like in the second picture above): void Canvas::drawLineTo(const QPoint &endPoint) { QPainter painter(&image); painter.setPen(QPen(Qt::black,1,Qt::SolidLine,Qt::RoundCap,Qt::RoundJoin)); int fx=0,fy=0,k=0; qPoints.clear(); connectingPointsCalculator2(qPoints,lastPoint.x(),lastPoint.y(),endPoint.x(),endPoint.y()); int i=0; int x,y; for(i=0;i<qPoints.size();i++) { x=qPoints.at(i).x(); y=qPoints.at(i).y(); painter.setPen(Qt::black); painter.drawLine(x-5,y-5,x+5,y+5); **// Drawing slanted lines** **//Updating repeatedly during the for loop:** update(QRect(QPoint(x-5,y-5), QPoint(x+5,y+5)).normalized());//.adjusted(-rad,-rad,rad,rad)); } modified = true; int rad = (myPenWidth / 2) + 2; lastPoint = endPoint; } Can anyone see what the issue might be ?

    Read the article

  • Can't integrate base64decode into my class

    - by MaheshBabu
    Hi folks, i am getting the image that is in base64 encoded format. I need to decode it. i am writing the code for decoding is + (NSData *) base64DataFromString: (NSString *)string { unsigned long ixtext, lentext; unsigned char ch, input[4], output[3]; short i, ixinput; Boolean flignore, flendtext = false; const char *temporary; NSMutableData *result; if (!string) return [NSData data]; ixtext = 0; temporary = [string UTF8String]; lentext = [string length]; result = [NSMutableData dataWithCapacity: lentext]; ixinput = 0; while (true) { if (ixtext >= lentext) break; ch = temporary[ixtext++]; flignore = false; if ((ch >= 'A') && (ch <= 'Z')) ch = ch - 'A'; else if ((ch >= 'a') && (ch <= 'z')) ch = ch - 'a' + 26; else if ((ch >= '0') && (ch <= '9')) ch = ch - '0' + 52; else if (ch == '+') ch = 62; else if (ch == '=') flendtext = true; else if (ch == '/') ch = 63; else flignore = true; if (!flignore) { short ctcharsinput = 3; Boolean flbreak = false; if (flendtext) { if (ixinput == 0) break; if ((ixinput == 1) || (ixinput == 2)) { ctcharsinput = 1; else ctcharsinput = 2; ixinput = 3; flbreak = true; } input[ixinput++] = ch; if (ixinput == 4) ixinput = 0; output[0] = (input[0] << 2) | ((input[1] & 0x30) >> 4); output[1] = ((input[1] & 0x0F) << 4) | ((input[2] & 0x3C) >> 2); output[2] = ((input[2] & 0x03) << 6) | (input[3] & 0x3F); for (i = 0; i < ctcharsinput; i++) [result appendBytes: &output[i] length: 1]; } if (flbreak) break; } return result; } i am calling this in my method like this NSData *data = [base64DataFromString:theXML]; theXML is encoded data. but it shows error decodeBase64 undeclared. How can i use this method. can any one pls help me. Thank u in advance.
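
    For reference only, the transform being hand-rolled above is plain Base64; the same decode in a couple of lines of Python (not Objective-C) shows what the routine is expected to produce:

        import base64

        def data_from_base64(text):
            # Standard-library equivalent of the hand-rolled table-driven decoder.
            return base64.b64decode(text)

        print(data_from_base64("aGVsbG8gd29ybGQ="))  # b'hello world'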

    Read the article

  • form_for called in a loop overloads IDs and associates fields and labels incorrectly

    - by Katy Levinson
    Rails likes giving all of my fields the same IDs when they are generated in a loop, and this causes trouble. <% current_user.subscriptions.each do |s| %> <div class="subscription_listing"> <%= link_to_function s.product.name, "toggle_delay(this)"%> in <%= s.calc_time_to_next_arrival %> days. <div class="modify_subscription"> <%= form_for s, :url => change_subscription_path(s) do |f| %> <%= label_tag(:q, "Days to delay:") %> <%= text_field_tag(:query) %> <%= check_box_tag(:always) %> <%= label_tag(:always, "Apply delay to all future orders") %> <%= submit_tag("Change") %> <% end %> <%= link_to 'Destroy', s, :confirm => 'Are you sure?', :method => :delete %> </div> </div> <% end %> Produces <div class="subscription_listing"> <a href="#" onclick="toggle_delay(this); return false;">Pasta</a> in 57 days. <div class="modify_subscription"> <form accept-charset="UTF-8" action="/subscriptions/7/change" class="edit_subscription" id="edit_subscription_7" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="&#x2713;" /><input name="_method" type="hidden" value="put" /><input name="authenticity_token" type="hidden" value="s5LJffuzmbEMkSrez8b3KLVmDWN/PGmDryXhp25+qc4=" /></div> <label for="q">Days to delay:</label> <input id="query" name="query" type="text" /> <input id="always" name="always" type="checkbox" value="1" /> <label for="always">Apply delay to all future orders</label> <input name="commit" type="submit" value="Change" /> </form> <a href="/subscriptions/7" data-confirm="Are you sure?" data-method="delete" rel="nofollow">Destroy</a> </div> </div> <div class="subscription_listing"> <a href="#" onclick="toggle_delay(this); return false;">Gummy Bears</a> in 57 days. <div class="modify_subscription"> <form accept-charset="UTF-8" action="/subscriptions/8/change" class="edit_subscription" id="edit_subscription_8" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="&#x2713;" /><input name="_method" type="hidden" value="put" /><input name="authenticity_token" type="hidden" value="s5LJffuzmbEMkSrez8b3KLVmDWN/PGmDryXhp25+qc4=" /></div> <label for="q">Days to delay:</label> <input id="query" name="query" type="text" /> <input id="always" name="always" type="checkbox" value="1" /> <label for="always">Apply delay to all future orders</label> <input name="commit" type="submit" value="Change" /> </form> <a href="/subscriptions/8" data-confirm="Are you sure?" data-method="delete" rel="nofollow">Destroy</a> </div> </div> And that's a problem because now no matter which "Apply delay to all future orders" I select it always very helpfully checks the first box for me. How can I override the ID without doing something ugly and un-rails-like?

    Read the article

  • Varnish 3.0.2 and ISPConfig 3.0.4

    - by Warren Bullock III
    I followed the tutorial "The Perfect Server - Ubuntu 11.10 [ISPConfig 3]" here. I'm running an Ubuntu 11.04 (Natty Narwhal) server with 1024 MB of RAM on Rackspace. I've gone through and updated to ISPConfig 3.0.4. Everything had been working great until I decided to try and install Varnish. Initially I installed Varnish by issuing: apt-get update apt-get upgrade apt-get install varnish Apparently the version that was installed was Varnish 2.x, so I went back and added the repositories for packages provided by varnish-cache.org: curl http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add - echo "deb http://repo.varnish-cache.org/ubuntu/ lucid varnish-3.0" >> /etc/apt/sources.list apt-get update apt-get install varnish This updated my version of Varnish to 3.0.2. I then proceeded to make the following changes: vim /etc/default/varnish (change DAEMON_OPTS to port 80); vim /etc/apache2/ports.conf (NameVirtualHost *:8000, Listen 8000); vim /etc/apache2/sites-available/default (<VirtualHost *:8000>); vim /etc/apache2/sites-available/ispconfig.vhost (Listen 8080, NameVirtualHost *:8080, <VirtualHost _default_:8080>). I then set my other vhosts to use 8000 (the Apache2 port), and with all this in place I restarted both Apache2 and Varnish to test. I used Firebug in Firefox 11.0. The output doesn't seem to indicate that Varnish is working completely correctly. First of all I see: X-Varnish 1644834493 but I've heard that unless you have two timestamps side by side it's probably not working correctly, so for example I was thinking I might see something like: X-Varnish 1644834493 1644837493 Also, I noticed this in the output, which seems to be inconsistent: X-Drupal-Cache MISS There are times when it will say HIT as well. So the question I have is: I think Varnish is partially working, but why don't I see two timestamps on X-Varnish like I'm thinking I should, and does the output of the screenshot I have look correct? If Varnish isn't working, can someone tell me what I might be doing wrong? Thanks in advance.
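
    One way to double-check caching behaviour without Firebug is to request the same URL twice and compare the response headers; a small Python sketch (the URL is a placeholder for a site served through Varnish):

        import urllib.request

        def show_varnish_headers(url):
            # On a cache hit X-Varnish carries two request IDs (the current request
            # and the one that stored the object) and Age climbs above 0.
            for attempt in (1, 2):
                with urllib.request.urlopen(url) as resp:
                    print(attempt, "X-Varnish:", resp.headers.get("X-Varnish"),
                          "Age:", resp.headers.get("Age"))

        show_varnish_headers("http://www.example.com/")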

    Read the article

  • Problems setting NTP server with w32tm for a DC that is a Hyper-V guest

    - by R.Tonheim
    Hello ! I have tried to sett my DC to get its time from several NTP severs. I follow this answer (http://serverfault.com/questions/24298/w32time-sync-problems-for-hyper-v-guests-w32time-event-ids-38-24-29-35/24299#24299) to do it. First I disable Time Synchronization in the Hyper-V Integration Services for each guest. Then restart the Windows Time serviceon the guest. I had before this used this command: w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper. uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syn cfromflags:manual /reliable:yes /update And the cmd sad: The command completed successfully. But the time was still 10 min wrong... I run w32tm again after restarted the DC without it having any effect. The w32tm /query /status still say: "Source: Local CMOS Clock" FROM MY CMD: Microsoft Windows [Version 6.0.6002] Copyright (c) 2006 Microsoft Corporation. All rights reserved. C:\Users\Administrator.MHGw32tm /query /status Leap Indicator: 0(no warning) Stratum: 1 (primary reference - syncd by radio clock) Precision: -6 (15.625ms per tick) Root Delay: 0.0000000s Root Dispersion: 10.0000000s ReferenceId: 0x4C4F434C (source name: "LOCL") Last Successful Sync Time: 05.09.2009 20:06:21 Source: Local CMOS Clock Poll Interval: 6 (64s) C:\Users\Administrator.MHGw32tm /config /manualpeerlist:"ntp.uio.no;timekeeper. uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syn cfromflags:manual /reliable:yes /update The command completed successfully. C:\Users\Administrator.MHGw32tm /query /status Leap Indicator: 0(no warning) Stratum: 1 (primary reference - syncd by radio clock) Precision: -6 (15.625ms per tick) Root Delay: 0.0000000s Root Dispersion: 10.0000000s ReferenceId: 0x4C4F434C (source name: "LOCL") Last Successful Sync Time: 05.09.2009 20:06:21 Source: Local CMOS Clock Poll Interval: 6 (64s) C:\Users\Administrator.MHG
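
    Independently of w32tm, it is worth confirming that the listed NTP servers are actually reachable from the guest; a minimal SNTP probe in Python (the hosts are taken from the peer list above):

        import socket, struct, time

        NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

        def sntp_time(server, timeout=5):
            # Send a bare 48-byte client request and read the server's transmit
            # timestamp (seconds field at bytes 40-43 of the reply).
            request = b"\x1b" + 47 * b"\0"
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                s.sendto(request, (server, 123))
                reply, _ = s.recvfrom(512)
            return struct.unpack("!I", reply[40:44])[0] - NTP_EPOCH_OFFSET

        for host in ("ntp.uio.no", "0.no.pool.ntp.org"):
            print(host, "offset:", sntp_time(host) - time.time(), "seconds")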

    Read the article

  • QoS for Cisco Router to Prioritize Voice and Interactive Traffic

    - by TJ Huffington
    I have a Cisco 891W NATing Voice and Data to the internet over a 10mbit/2mbit connection. Voice traffic gets degraded when I upload large files. Pings time out as well. I tried to configure a QoS policy but it's basically not doing anything. Voice traffic still degrades when upload bandwidth gets saturated. Here is my current configruation: class-map match-any QoS-Transactional match protocol ssh match protocol xwindows class-map match-any QoS-Voice match protocol rtp audio class-map match-any QoS-Bulk match protocol secure-nntp match protocol smtp match protocol tftp match protocol ftp class-map match-any QoS-Management match protocol snmp match protocol dns match protocol secure-imap class-map match-any QoS-Inter-Video match protocol rtp video class-map match-any QoS-Voice-Control match access-group name Voice-Control policy-map QoS-Priority-Output class QoS-Voice priority percent 25 set dscp ef class QoS-Inter-Video bandwidth remaining percent 10 set dscp af41 class QoS-Transactional bandwidth remaining percent 25 random-detect dscp-based set dscp af21 class QoS-Bulk bandwidth remaining percent 5 random-detect dscp-based set dscp af11 class QoS-Management bandwidth remaining percent 1 set dscp cs2 class QoS-Voice-Control priority percent 5 set dscp ef class class-default fair-queue interface FastEthernet8 bandwidth 1024 bandwidth receive 20480 ip address dhcp ip nat outside ip virtual-reassembly duplex auto speed auto auto discovery qos crypto map mymap max-reserved-bandwidth 80 service-policy output QoS-Priority-Output crypto map mymap 10 ipsec-isakmp set peer 1.2.3.4 default set transform-set ESP-3DES-SHA match address 110 qos pre-classify ! fa8 is my connection to the internet. Voice traffic goes over a VPN ("mymap") to the SIP server. That's why I specified "qos pre-classify" which I believe is the way to classify traffic over the VPN. However even when I ping a public IP while saturating upload bandwidth, the latency is exceptionally high. Is this configuration correct? Are there any suggestions that might make this work for my setup? Thanks in advance.

    Read the article

  • How do I renew an expired Ubuntu OpenLDAP SSL Certificate

    - by Doug Symes
    We went through the steps of revoking an SSL Certificate used by our OpenLDAP server and renewing it but we are unable to start slapd. Here are the commands we used: openssl verify hostname_domain_com_cert.pem We got back that the certificate was expired but "OK" We revoked the certificate we'd been using: openssl ca -revoke /etc/ssl/certs/hostname_domain_com_cert.pem Revoking worked fine. We created the new Cert Request by passing it the key file as input: openssl req -new -key hostname_domain_com_key.pem -out newreq.pem We generated a new certificate using the newly created request file "newreq.pem" openssl ca -policy policy_anything -out newcert.pem -infiles newreq.pem We looked at our cn=config.ldif file and found the locations for the key and cert and placed the newly dated certificate in the needed path. Still we are unable to start slapd with: service slapd start We get this message: Starting OpenLDAP: slapd - failed. The operation failed but no output was produced. For hints on what went wrong please refer to the system's logfiles (e.g. /var/log/syslog) or try running the daemon in Debug mode like via "slapd -d 16383" (warning: this will create copious output). Below, you can find the command line options used by this script to run slapd. Do not forget to specify those options if you want to look to debugging output: slapd -h 'ldap:/// ldapi:/// ldaps:///' -g openldap -u openldap -F /etc/ldap/slapd.d/ Here is what we found in /var/log/syslog Oct 23 20:18:25 ldap1 slapd[2710]: @(#) $OpenLDAP: slapd 2.4.21 (Dec 19 2011 15:40:04) $#012#011buildd@allspice:/build/buildd/openldap-2.4.21/debian/build/servers/slapd Oct 23 20:18:25 ldap1 slapd[2710]: main: TLS init def ctx failed: -1 Oct 23 20:18:25 ldap1 slapd[2710]: slapd stopped. Oct 23 20:18:25 ldap1 slapd[2710]: connections_destroy: nothing to destroy. We are not sure what else to try. Any ideas?
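
    Before chasing slapd itself, it can help to confirm that every PEM file referenced in cn=config (certificate, key and CA certificate) exists, is readable by the openldap user and has a sane validity window; a small Python sketch that shells out to the openssl CLI (the paths are examples, substitute the values from olcTLSCertificateFile and friends):

        import subprocess

        def cert_validity(path):
            # Print the notBefore/notAfter window of a PEM certificate.
            out = subprocess.run(
                ["openssl", "x509", "-noout", "-startdate", "-enddate", "-in", path],
                capture_output=True, text=True, check=True)
            return out.stdout.strip()

        # Example paths only; use the ones from the slapd cn=config entry.
        for pem in ("/etc/ssl/certs/hostname_domain_com_cert.pem",
                    "/etc/ssl/certs/cacert.pem"):
            print(pem)
            print(cert_validity(pem))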

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf making only these changes, then restarted mysqld with /etc/init.d/mysql restart All appeared to be well but when I connect to msyqld remotely or locally though it connects okay a slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client looks like: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 3 Server version: 5.1.31-1ubuntu2-log Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> show tables; ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... Connection id: 1 Current database: mydb ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away Bus error The mysqld error log looks like: 101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:35:51 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=3 max_threads=600 threads_connected=3 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18209220 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f8d791580d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f902a76a080] /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5] /lib/libc.so.6(abort+0x183) [0x7f90291fabc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f902a7623ba] /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18213c70 = thd->thread_id=3 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:35:51 mysqld_safe Number of processes running now: 0 101210 16:35:51 mysqld_safe mysqld restarted InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 101210 16:35:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 101210 16:35:56 InnoDB: Started; log sequence number 456 143528628 101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 16:35:56 [Note] Event Scheduler: Loaded 0 events 101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:36:11 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. 
This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18588720 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f4c8a73f080] /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5] /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f4c8a7373ba] /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18599950 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:36:11 mysqld_safe Number of processes running now: 0 101210 16:36:11 mysqld_safe mysqld restarted The config is [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] innodb_file_per_table innodb_buffer_pool_size=10G innodb_log_buffer_size=4M innodb_flush_log_at_trx_commit=2 innodb_thread_concurrency=8 skip-slave-start server-id=3 # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /DB2/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. 
#bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 128K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 600 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 32M # skip-federated slow-query-log skip-name-resolve Update: I followed the instructions as per http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4 and the logs are showing a different error but the behavior is still the same: 101210 19:14:15 mysqld_safe mysqld restarted 101210 19:14:19 InnoDB: Started; log sequence number 456 143528628 InnoDB: !!! innodb_force_recovery is set to 4 !!! 101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 19:14:19 [Note] Event Scheduler: Loaded 0 events 101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend, InnoDB: space id 1602 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/access_request, InnoDB: space id 1318 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/activity, InnoDB: space id 1595 did not exist in memory. Retrying an open. 101210 19:14:32 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x1753c070 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f7cbc350080] /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516] /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f7cbc3483ba] /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x1754d690 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.

    Read the article

  • SuperMicro BMC on OpenSuSE Linux: cannot access from LAN

    - by Kendall
    Hi, I have an (old) SMC-001 IPMI device on an (old) X6DVL-EG2 motherboard. My problem is that I cannot access the BMC from LAN. I'm also getting some interesting output from ipmitool. First, the setup. I enable Console Redirection in the BIOS, turn BIOS Redirection after POSt to "disabled". I then modprobe'ed for ipmi_msghandler, ipmi_devintf and ipmi_si. I then found ipmi0 under /dev. So far so good. Since I want console redirection over serial, I modified /boot/grub/menu.lst: http://pastebin.com/YYJmhusQ I then modified "/etc/inittab" as follows: S1:12345:respawn:/sbin/agetty -L 19200 ttyS1 ansi Networking I set as following, using "ipmitool" ipaddr: 192.168.3.164 netmask: 255.255.255.0 defgw: 192.168.3.1 The above are correct for my environment. To test it I do: ipmitool -I open chassis power off which responds by powering off the machine. When I to access from another computer on the network, however, I get an error message: host# ipmitool -I lanplus -H 192.168.10.164 -U Admin -a chassis power status Error: Unable to establish LAN session Unable to get Chassis Power Status "Admin" seems to be a valid user name: host# ipmitool -I open user list 1 2 Admin true false true USER The interesting output from ipmitool I initially mentioned: host # ipmitool -I open lan set 1 access on Set Channel Access for channel 1 failed: Request data field length limit exceeded Also, newload4:/home/gjones # ipmitool channel info 1 Channel 0x1 info: Channel Medium Type : 802.3 LAN Channel Protocol Type : IPMB-1.0 Session Support : session-less Active Session Count : 0 Protocol Vendor ID : 7154 Get Channel Access (volatile) failed: Request data field length limit exceeded The output of "ipmitool -I open lan print 1" is here: http://pastebin.com/UZyL6yyE Any help/suggestions is greatly appreciated; I've been working with this thing for a few hours now with no success.

    Read the article

  • How to find out where (or if) MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a Linux virtual private server, with root access to the machine via SSH. I am trying to locate a file that will help me determine which users have accessed which database and from which hosts; I imagine this kind of data is stored in a log file somewhere. The MySQL documentation says: "The general query log - Established client connections and statements received from clients" (see http://dev.mysql.com/doc/refman/5.0/en/server-logs.html). It also says: "By default, all log files are created in the mysqld data directory." So I am NOT asking where the general query logs are stored (because I expect answers saying "it depends"). Please help me work out how I can go about finding where the MySQL general query log is stored on a Linux machine. A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following: [mysqld] skip-bdb skip-innodb set-variable = max_connections=500 safe-show-database I have also looked in /var/lib/mysql/ but could not see any log-like file names in that directory. Any clues would be most welcome.
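
    Rather than searching the filesystem, the running server can be asked directly whether the general query log is enabled and where it writes; a short Python sketch that shells out to the mysql client (credentials are placeholders, and on MySQL 5.0 the equivalent setting is the log option in my.cnf rather than the general_log variables):

        import subprocess

        # Ask mysqld for its log settings and data directory; prompts for the
        # root password. These variable names exist from MySQL 5.1 onwards.
        sql = ("SHOW VARIABLES LIKE 'general_log%';"
               "SHOW VARIABLES LIKE 'log_output';"
               "SHOW VARIABLES LIKE 'datadir';")
        subprocess.run(["mysql", "-u", "root", "-p", "-e", sql], check=True)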

    Read the article

  • Can't set up a 3-node MongoDB replica set

    - by Victor Lin
    I just follow instructions in MongoDB document Replica Sets - Basics to setup a 3-node Replica set. Everything goes fine when I do the initiate and add first node in the primary. [foo@host-a mongodb]$ bin/mongo localhost MongoDB shell version: 1.8.2 connecting to: localhost > rs.initiate() { "info2" : "no configuration explicitly specified -- making one", "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } > rs.add("host-b") { "ok" : 1 } So far so good, but when I try to add third node myset:PRIMARY> rs.addArb("host-c") Sun Aug 7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017 Sun Aug 7 22:57:09 SocketException: remote: error: 9001 socket exception [1] Sun Aug 7 22:57:09 DBClientCursor::init call() failed Sun Aug 7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1 Sun Aug 7 22:57:09 Error: error doing query: failed shell/collection.js:150 Sun Aug 7 22:57:09 trying reconnect to 127.0.0.1 Sun Aug 7 22:57:09 reconnect 127.0.0.1 ok As result, the current primary became secondary, and the host-b was marked as dead, but actually, it is still alive. myset:SECONDARY> rs.status() { "set" : "myset", "date" : ISODate("2011-08-08T04:03:23Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "host-a:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "optime" : { "t" : 1312775799000, "i" : 1 }, "optimeDate" : ISODate("2011-08-08T03:56:39Z"), "self" : true }, { "_id" : 1, "name" : "host-b", "health" : 0, "state" : 6, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"), "errmsg" : "still initializing" } ], "ok" : 1 } How could this happen? I just follow the guide in the document, did I do something wrong? Moreover, I can't do anything on current secondary server. It doesn't allow me to reconfig on the secondary node, but the problem is there is no primary node. myset:SECONDARY> rs.reconfig({}) { "errmsg" : "replSetReconfig command must be sent to the current replica set primary.", "ok" : 0 } Any ideas?
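
    As a sanity check, each member's own view of the set can be queried from a script as well as from the shell; a sketch using the third-party pymongo driver (assumed installed, hostnames taken from the post):

        from pymongo import MongoClient  # third-party driver, assumed available

        # Ask every member for replSetGetStatus; a node that is down or cannot
        # see the others will raise or report peers as "(not reachable/healthy)".
        for host in ("host-a", "host-b", "host-c"):
            try:
                client = MongoClient(host, 27017, serverSelectionTimeoutMS=3000)
                status = client.admin.command("replSetGetStatus")
                print(host, [(m["name"], m["stateStr"]) for m in status["members"]])
            except Exception as exc:
                print(host, "->", exc)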

    Read the article

  • 'pskill \\hostname winlogon' might budge a server "stuck rebooting", but why?

    - by Snoi
    Question: Executing remote (Sysinternals) command... pskill \\machine winlogon ...can budge a server that is stuck rebooting, but how/why does this work? How do you know which service to kill? To recreate (e.g.): You run Windows Update, allow a reboot, and ...NOTHING! RDP gets cut off but the server does not reboot. Just about every other service seems to stay up. Further Background: I've faced this problem on VMs hosted around the planet for some years, and used various sc.exe and shutdown commands to learn the state of and attempt remote reboot of servers in such a state, with limited success. Most datacentres don't offer any way to see the true console or power off/on such machines. They charge $$ for you to call them to do such simple things after hours, when you nearly always have to run your maint tasks. e.g. NET USE \\machine\IPC$ /USER:login password sc \\machine query RpcSs sc \\machine query TermService sc \\machine query wuauserv tasklist /s machine This occasionally works for me... shutdown /m \\machine /r /f /t: 0 ...but more often than not it fails with: A system shutdown is in progress (1115). I found this question, and the answer by @Tweek, and it worked really well, but was I just lucky? Can not RDP to Win 2003 box or initiate remote restart @Tweek said to run: pskill \\hostname winlogon ...and that got me past this situation in a new way (Server 2008 R2 in my most recent case) - really useful! I just need to understand if I got lucky or there is more science here. What I'd like to know is why the winlogon process? @Livne said to use "tasklist /s HostName" to see what is the culprit, but how do you tell from the listed output? It's just a list of running tasks etc. From that I would not know what to look for, nor could I see anything about the winlogon process that suggested to my eyes that was the one to kill.

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error: DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99) Server , database Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?'). This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control of this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records? Additional information: The query works fine from tsql. I only get this error through Perl's DBD::Sybase interface. (Should I test through something else? I don't have the expertise yet to install PHP or Python. I've got jTDS and can use it, but I think that's a completely different implementation, not an interface to FreeTDS.) Adding client charset = UTF-8 to my freetds.conf file results in "Out of memory!" printed to STDERR.
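
    The bytes that typically break the conversion are Windows-1252 punctuation (curly quotes and the like) pasted from Outlook or Word; a tiny Python demonstration of why they survive a UTF-8 client charset but turn into question marks under a plain-ASCII one:

        # Windows-1252 "smart" punctuation commonly pasted from Word/Outlook.
        sample = "\u2018single\u2019 \u201cdouble\u201d"

        print(sample.encode("cp1252"))            # the single-byte form stored upstream
        print(sample.encode("utf-8"))             # representable when the client charset is UTF-8
        print(sample.encode("ascii", "replace"))  # b'?single? ?double?' - the '?' substitution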

    Read the article

  • iSCSI target on Ubuntu

    - by erai
    I'm trying to setup iscsitarget on Ubuntu 12.04 but I can't connect to it. On the windows machine it says Target Error. with no other output. My ietd.conf is Target iqn.2012-06.com.org:virtual_machines.lun Lun 0 Type=fileio,Path=/media/volume0/storlun0.bin When I run iscsiadm -m discovery -t st -p localhost The output is iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 closed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: connection login retries (reopen_max) 5 exceeded iscsiadm: Could not perform SendTargets discovery. dmesg output: [ 3324.804665] iscsi_trgt: Removing all connections, sessions and targets [ 3325.875343] iSCSI Enterprise Target Software - version 1.4.20.3 [ 3325.875415] iscsi_trgt: Registered io type fileio [ 3325.875420] iscsi_trgt: Registered io type blockio [ 3325.875425] iscsi_trgt: Registered io type nullio

    Read the article

  • How can I link an Oracle user to a Business Objects user

    - by Robert Speckmann
    I have a problem linking an Oracle user to a Business Objects user. I will try to explain it in as much detail as possible. I have an Oracle database (10g) where a couple of users are defined. These users can query information with application X, and those records are then written into the Oracle database. Each record written to the database has an ID that links it to the person who ran the query. I also have an Active Directory in which a couple of users exist: testuser1 and testuser2. When those users log on and load a report in Business Objects XI, I want them to see the information that was created when the query was run earlier by that same user in application X. The name of the person in Active Directory and the name in the Oracle database are not the same, but I don't think that is a problem at this stage. The steps I took: first, I run a report in application X (with an account prodpim_rs), which fills my Oracle database with a record. The second step is logging on as testuser1 (from AD) and then logging in to Business Objects XI with that account. Now I want to load a report with the information in my Oracle database, so the prodpim_rs user and the test user must be linked. I am wondering how to accomplish this. Can I link the account made in the Oracle database with the BO user that is linked to my AD? Thank you in advance for your reply. Robert

    Read the article

  • Run FTP session from bash script

    - by Adam Salkin
    I'm trying to write a BASH script to test if an FTP site that I own is running. I therefore want the bash script to connect to the FTP site, log in with a dummy account and redirect the output to a file that I can then grep to confirm that the login succeeded. (I know that putting user/pass in a file is not recommended, but this dummy account is chrooted to one empty directory and can't escape to the shell, and in any case I'm the only user who can log in to a shell prompt.) I'm using the BASH shell on Ubuntu. I created a file called "ftp-dummy" which looks like this:

        username
        password

    And I then did this from the prompt:

        adam$ ftp my.ftpsite.com < ftp-dummy

    This does not work - I don't see the normal welcome message and the output is:

        Password:Name (my.ftpsite.com:adam) :

    I tried removing the space between the < and the filename - same result. If I redirect the output to a testfile, the testfile shows:

        Name (my.ftpsite.com:adam): ?Invalid command

    And I still get a Password prompt on STDOUT. I also tried using echo and get the same result:

        echo -e "username \npassword \n" | ftp my.ftpsite.com

    I don't see why I'm not seeing the normal welcome message or why the input is not being read from the file. Any help would be much appreciated. Thanks, Adam
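    A sketch of the usual workaround (host and credentials as in the question, log path illustrative): ftp prompts for a name because it attempts its own auto-login before reading stdin, so the redirected file is consumed as FTP commands rather than as answers to the prompts. Starting ftp with -n suppresses that auto-login and lets the script supply credentials via the user command:

        #!/bin/bash
        # -n: no auto-login (avoids the "Name (...):" prompt), -i: no interactive prompting,
        # -v: keep server replies so the log can be grepped.
        printf 'user username password\nquit\n' \
            | ftp -inv my.ftpsite.com > /tmp/ftp-test.log 2>&1

        # FTP reply code 230 means "User logged in, proceed".
        if grep -q '^230' /tmp/ftp-test.log; then
            echo "FTP login OK"
        else
            echo "FTP login FAILED"
        fi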

    Read the article

  • SQL Server 2005 - Linked Visual Foxpro Authorization

    - by John
    Here's the scenario: We have an existing SQL 2000 Server that has a linked server to a share directory (on another server) containing Visual FoxPro tables; all connections work correctly. Porting the SQL 2000 server to a new SQL 2005 server results in questionable behavior: if you connect to the server remotely, using Windows Authentication, you receive this error when running a query against the linked server:

        OLE DB provider "MSDASQL" for linked server "[linked server name]" returned message "[Microsoft][ODBC Visual FoxPro Driver]File 'MyTable.dbf' does not exist.".
        Msg 7350, Level 16, State 2, Line 2
        Cannot get the column information from OLE DB provider "MSDASQL" for linked server "[linked server name]".

    However, logged in locally, the query works fine. The query also works correctly when logged in remotely, but using a SQL login. The only scenario in which I receive the error is when connected remotely, using Windows Authentication. As I mentioned before, this works on the SQL 2000 server, and both the old and new servers are running under the same network account (which has access to the folder the FoxPro files are in). Doing a little searching on the internet it looks like others have run into this situation, but I haven't found a resolution. Has anyone run into this before?
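    This pattern (failing only for remote Windows-authenticated sessions) usually points at the Kerberos "double hop": SQL Server cannot forward the impersonated user's credentials on to the file share holding the .dbf files. One workaround that is often suggested - a sketch only, with "VFP_LINK" and "YOURSERVER" as placeholders - is to map logins so the linked-server connection does not impersonate the caller:

        # Map all local logins to "no security context"; the FoxPro files are then opened
        # under the SQL Server service account, which already has access to the share.
        sqlcmd -S YOURSERVER -E -Q "EXEC sp_addlinkedsrvlogin @rmtsrvname = N'VFP_LINK', @useself = N'FALSE', @locallogin = NULL, @rmtuser = NULL, @rmtpassword = NULL"

        # Re-test the failing query from a remote, Windows-authenticated session:
        sqlcmd -S YOURSERVER -E -Q "SELECT TOP 1 * FROM OPENQUERY(VFP_LINK, 'SELECT * FROM MyTable')"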

    Read the article

  • nameserver spoiling avahi multicast name resolution of .local domain

    - by Doug Coburn
    After trying to ping a machine on my local network, I noticed that I was trying to hit address 66.152.109.24. This is an external public address. Resolution should have occurred via avahi mDNS. I ran dig to see how the name resolution worked, and my Qwest/CenturyLink name server was returning results for my .local domain queries! I tried a random name and got the same IP address result.

        $ dig jakdafj.local

        ; <<>> DiG 9.8.1-P1-RedHat-9.8.1-3.P1.fc15 <<>> jakdafj.local
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58410
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;jakdafj.local.            IN    A

        ;; ANSWER SECTION:
        jakdafj.local.        10    IN    A    66.152.109.24
        jakdafj.local.        10    IN    A    204.232.231.46

        ;; Query time: 104 msec
        ;; SERVER: 205.171.3.25#53(205.171.3.25)
        ;; WHEN: Sat Mar 24 20:40:17 2012
        ;; MSG SIZE  rcvd: 63

    Am I missing something, or is my DNS name server at 205.171.3.25 corrupted?
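    Note that dig talks straight to the configured DNS server and never consults avahi, so it will always show the ISP's wildcard answers; what matters for ping is the hosts line in nsswitch.conf. A quick check, assuming the nss-mdns package is installed:

        # Test resolution the way ping sees it (through nsswitch), not the way dig sees it:
        getent hosts somemachine.local

        # mDNS only gets a chance if mdns4_minimal comes before dns, and the
        # [NOTFOUND=return] marker stops .local lookups from falling through to the ISP resolver:
        grep '^hosts:' /etc/nsswitch.conf
        # expected, roughly:  hosts: files mdns4_minimal [NOTFOUND=return] dns

        # and avahi itself must be running (Fedora 15 uses systemd):
        systemctl status avahi-daemon.service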

    Read the article

  • PHP + MySQL site performance

    - by Diego
    I have to manage a site which wasn't developed by me. It is in PHP using a MySQL database, which is located on the web server. The site sometimes (when visitors increase too much) stops responding, or responds too slowly. I have developed some sites in PHP but never took care of the management, so I really don't know where to start. The server (the hardware) seems to be fine: when the site stops responding, the CPU is at about 55% and there is plenty of memory. I'm not asking someone to solve this issue. I would really just like a few tips about where I can find logs and how I should read and interpret them. That way I would be able to tell whether it's the network traffic, the database (and which queries), or something else. Thanks! Update: Forgot to say: it is a Windows Server 2003. Note: I've recorded about a day with Jet Profiler. I don't really understand all the information it provides, but there is one query which it marks as really slow. That makes sense because it is a SELECT with a WHERE clause that has three LIKE conditions. Initially I didn't include this in my question because when I run the query from MySQL Query Browser it doesn't take long at all - it is under 0.01 seconds.
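    A reasonable first step - a sketch assuming MySQL 5.1 or later; older 5.0 installs need log-slow-queries set in my.ini and a restart instead - is to turn on the slow query log and EXPLAIN the suspect statement, since a LIKE '%word%' predicate forces a full table scan no matter how fast it looks from Query Browser on an idle server (database, table and column names below are placeholders):

        # Log every statement that takes longer than one second, and find out where the log is written:
        mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1;"
        mysql -u root -p -e "SHOW VARIABLES LIKE 'slow_query_log_file';"

        # Check whether the three-LIKE query can use an index at all:
        mysql -u root -p -e "EXPLAIN SELECT * FROM your_db.your_table WHERE some_col LIKE '%word%'\G"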

    Read the article

  • postgresql No space left on device

    - by pstanton
    Postgres is reporting that it is out of disk space while performing a rather large aggregation query:

        Caused by: org.postgresql.util.PSQLException: ERROR: could not write block 31840050 of temporary file: No space left on device
            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)
            at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:189)
            ... 8 more

    However the disk has quite a lot of space:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             386G  123G  243G  34% /
        udev                  5.9G  172K  5.9G   1% /dev
        none                  5.9G     0  5.9G   0% /dev/shm
        none                  5.9G  628K  5.9G   1% /var/run
        none                  5.9G     0  5.9G   0% /var/lock
        none                  5.9G     0  5.9G   0% /lib/init/rw

    The query is doing the following:

        INSERT INTO summary_table
        SELECT t.a, t.b, SUM(t.c) AS c, COUNT(t.*) AS count, t.d, t.e,
               DATE_TRUNC('month', t.start) AS month, tt.type AS type, FALSE, tt.duration
        FROM detail_table_1 t, detail_table_2 tt
        WHERE t.trid=tt.id
          AND tt.type='a'
          AND DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')>=23
           OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')<13
        GROUP BY month, type, t.a, t.b, t.d, t.e, FALSE, tt.duration

    any tips?
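    One thing worth verifying - a sketch, with the Debian/Ubuntu-style path below only a guess - is where the temporary files for this aggregation are actually written. Sorts and hashes spill into pgsql_tmp under the data directory (or under the tablespace named in temp_tablespaces), and that directory may sit on a different, smaller filesystem than the 243 GB shown for /:

        # Where is the cluster, and is a separate temp tablespace configured?
        sudo -u postgres psql -c "SHOW data_directory;"
        sudo -u postgres psql -c "SHOW temp_tablespaces;"

        # Watch the temp files grow while the aggregation runs (path is a guess):
        watch -n 5 "du -sh /var/lib/postgresql/*/main/base/pgsql_tmp"

        # And check free space on the filesystem the data directory really lives on:
        df -h "$(sudo -u postgres psql -Atc 'SHOW data_directory;')"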

    Read the article

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: Changes will be made to the REPO directory, and this should get updated to the WC (working copy), as opposed to the normal way of WC -> REPO.

    Scenario:
    My svn repo - /var/www/svn/drupal
    My checkout dir / working copy - /var/www/html/drupalsite

    So I've done:
    1. Edited the post-commit hook to contain: "/usr/bin/svn update /var/www/html/drupalsite"
    2. I won't make any change to the svn WC. I'll make changes to the svn REPO - /var/www/svn/drupal.
    3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes from the REPO.

    Query:
    a. Would the above steps 1-3 help achieve my 'Requirement'?
    b. I'd need advice on how to test whether the above setup works successfully or not. I'm at a loss about the success of steps 1-3, which is the reason why query (a) is present. This is a bit more of a concern for me.

    NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up. So I'm not aware of how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir to see a change in the WC and ran steps 1-3, but to no avail, and later on learned that this was NOT the way to make a change to a REPO. Please advise. Thanks
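    A sketch of how this is normally wired up and tested (file names below are illustrative): commits never go into /var/www/svn/drupal by copying files there; they always go through some working copy, so the test is to commit from a scratch checkout and then see whether the deployed working copy picked the change up.

        # The hook script /var/www/svn/drupal/hooks/post-commit should contain roughly:
        #
        #   #!/bin/sh
        #   /usr/bin/svn update /var/www/html/drupalsite
        #
        # and it must be executable (and run as a user allowed to write to the deployed WC):
        chmod +x /var/www/svn/drupal/hooks/post-commit

        # Test from a throwaway checkout, never by copying files into the repository directory:
        svn checkout file:///var/www/svn/drupal /tmp/drupal-test
        echo "hook test" > /tmp/drupal-test/hook-test.txt
        svn add /tmp/drupal-test/hook-test.txt
        svn commit -m "testing post-commit deployment" /tmp/drupal-test

        # If the hook fired, the new file should now appear in the deployed working copy:
        ls -l /var/www/html/drupalsite/hook-test.txt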

    Read the article
