Search Results

Search found 604 results on 25 pages for 'bruno lee'.

Page 5/25 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Failing to upgrade to linux-image-3.0.0-26-generic

    - by Dan Lee
    When I try to upgrade linux-image-3.0.0-26-generic I get the following problems: dpkg-deb (subprocess): data: internal bzip2 read error: 'DATA_ERROR' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/linux-image-3.0.0-26-generic_3.0.0-26.42_amd64.deb (--unpack): short read on buffer copy for backend dpkg-deb during `./lib/modules/3.0.0-26-generic/kernel/drivers/scsi/fnic/fnic.ko' No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.0.0-26-generic /boot/vmlinuz-3.0.0-26-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.0.0-26-generic /boot/vmlinuz-3.0.0-26-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.0.0-26-generic_3.0.0-26.42_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.0.0-26-generic; however: Package linux-image-3.0.0-26-generic is not installed. I don't know why this happens to me; earlier upgrades always worked without problems. Does anybody know how to fix this?
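
    A common cause is a corrupted download sitting in the apt cache. A minimal recovery sketch, assuming the cached .deb is simply damaged and can be re-fetched (if the error keeps coming back after a clean re-download, suspect the disk or RAM instead):

        sudo apt-get clean                                    # drop the cached, possibly corrupted .deb files
        sudo apt-get update                                   # refresh the package lists
        sudo apt-get install -f                               # let apt repair the half-installed kernel packages
        sudo apt-get install linux-image-3.0.0-26-generic     # re-download and install the kernel image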

    Read the article

  • Best S.E.O. practice for backlinking etc

    - by Aaron Lee
    I'm currently working on a website that I am really looking to optimise for search engines. I've been making between 5 and 20 directory submissions daily, I've validated and optimised my code, and I've joined a lot of forums to talk about the website in question; however, I don't seem to be making much of an impact on Google. I know that S.E.O. takes a while to start making an impact, and that Google prefers sites that are regularly updated and aged, but are there any more practices that can really help with organic results in search engines? I have looked on Google itself and a few other search engines, but nobody is willing to talk about extensive S.E.O. practices, as they normally don't want people knowing their formulas for S.E.O. Also, does anyone know of a decent piece of software that really looks into the ins and outs of your page and provides feedback? I usually use http://www.woorank.com, but using only one program doesn't show whether it's exactly correct in what it's saying. If anyone could help it would be much appreciated, thank you very much.

    Read the article

  • How Estimates Became Quotes

    - by Lee Brandt
    It’s our fault. Well, not completely, but we haven’t helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seem to be pretty much par for the course. Where We Started When we first started estimating, we estimated pretty clearly. We would try to imagine something we’d done that was similar to the project being estimated and we’d toss it about in our heads a bit and see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn’t spend too much time on it, because we wanted to get to writing the software. Eventually, we’d come across some huge problem that there was no way we could’ve known about ahead of time. Either we didn’t see this thing, or we didn’t realize that this particular version of a problem would be so… problematic. We usually call this “not knowing what we don’t know”. It’s unavoidable. We just can’t know. Until we wade in and start putting some code together, there are just some things we won’t know… and some things we don’t even know that we don’t know. Y’know? So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could’ve done) about it. We didn’t know. We thought about it for a bit and we didn’t see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don’t know. We’ll do some prototyping, we’ll read some blogs about similar projects, we’ll really grill the customer with questions during the requirements gathering phase. We’ll keep asking “what else?” until they shove us down the stairs. We’ll take our time and uncover it all. We Learned, But Good The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn’t know what we didn’t know. How could we? We didn’t know to ask. The worst part is, we’ve now convinced the product owner that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely no way you don’t know everything there is to know about this project now. No way there is anything you haven’t uncovered. And their faith in that “Esti-Quote” goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you’re doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. So we set them up to fail. Guess, Then Plan We had a chance. At the beginning we could have just said, “That’s just a gut-feeling estimate, based on my past experience with similar projects. 
There could still be surprises.” If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it’s an educated guess, but it is still a guess. If we get it right, it will be almost completely luck. Then, we help them to plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well) and we should be able to see any problems immediately if we are constantly measuring our pace. In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we’ll see them sooner and have a better understanding of how they will affect our overall timeline. What do you think?

    Read the article

  • No longer able to boot stuck in busybox shell

    - by Chris J. Lee
    I've installed Windows 7 and Ubuntu 11.04. After a storm killed the power, I'm unable to boot; I'm stuck in the BusyBox shell (ash). Here's what happens when I boot: the BIOS loads, then GRUB displays options to load: Ubuntu, Ubuntu recovery, memtest, another memtest option, Windows 7, Windows 7 recovery. I load Ubuntu. This causes it to start loading, but I see no normal Ubuntu screen, just the BusyBox shell. I try loading Ubuntu via fsck -l, and it returns a /bin/sh not found error. I load Windows 7 and I'm unable to boot; I get a blue screen of death. I then load Ubuntu recovery and I don't have any luck either. Any ideas where to go from here?
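
    One way forward, sketched below, is to boot from an Ubuntu live CD/USB and check the root filesystem from there; the device name is an assumption, so confirm it with fdisk first:

        sudo fdisk -l              # identify the Ubuntu root partition (e.g. /dev/sda5 -- hypothetical)
        sudo fsck -f /dev/sda5     # repair the filesystem; it must not be mounted while fsck runs
        sudo reboot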

    Read the article

  • What partition to use to keep data files in Ubuntu?

    - by Martin Lee
    I have been using Ubuntu for a few years and usually my partition setup was the following: an Ext3 or Ext4 partition for the system itself (20 GB); a 10 GB swap partition; a big FAT32 partition to store movies, photos, work stuff, etc. (depends on the capacity of the disk, but usually it is whatever is left after Ext3+swap, currently more than 200 GB). Does this setup sound right? I am considering switching to one big Ext3 partition now, because the problems with FAT32 in Ubuntu have not gone anywhere: for example, right now I can access my 'big' partition with a 'Data' label only through /media/_themes?END. Pretty strange name for a partition, isn't it? Some Linux software fails to read/write on this partition. For example, if I want to play around with rebar and build/make/compile things on this FAT32 partition, it will always complain about permissions and won't work (the same goes for many other kinds of software). It is not stable either: I cannot refer to some files on this FAT32 partition, because after the next reboot it will be called something other than '_themes?END'. On the other hand, I usually begin to run out of space on the Ext3 partition after a few months of usage. So, the question is: what is the best setup of partitions for an Ubuntu system? Should a FAT32 partition be used at all?
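
    If the FAT32 partition is dropped, a shared native data partition can be set up roughly as in the sketch below (the device name and mount point are assumptions, ext3 works the same way via mkfs.ext3, and the data must be backed up before reformatting):

        sudo mkfs.ext4 -L Data /dev/sda3                            # hypothetical device holding the big partition
        sudo mkdir -p /data
        echo 'LABEL=Data  /data  ext4  defaults  0  2' | sudo tee -a /etc/fstab
        sudo mount /data                                            # a stable path, unlike /media/_themes?END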

    Read the article

  • Installing gcc in Ubuntu 11.10

    - by Chi-Ping Lee
    I want to install gcc on my computer. To do this, I ran the following command: sudo apt-get install build-essential As this runs, it connects (or tries to connect) to the server tw.archive.ubuntu.com. But the server is not working. How can I fix this and get gcc installed? Note: the Taiwan mirror is down as of 2012-06-01 0352. See thread here. This pastebin contains the text of /etc/apt/sources.list, after changing from tw.archive.ubuntu.com to the main server.
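
    A sketch of switching away from the broken mirror; it simply rewrites sources.list to point at the main archive (back the file up first):

        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i 's|tw.archive.ubuntu.com|archive.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install build-essential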

    Read the article

  • What one feature available in other IDEs should be added to Xcode? [closed]

    - by Graham Lee
    This is inspired by Which features from other IDEs/editors you wish you have in Visual Studio? Xcode is a very different tool from Visual Studio, with a different feature set. While some of its capabilities are very mature (it has had RAD UI layout in Interface Builder since before most other platforms), it lacks some features that e.g. Visual Studio or Eclipse provide. If you could request one feature to be added to Xcode, which would it be? How would that feature help you write better code, or write the same code faster?

    Read the article

  • Ubuntu 12.04 Touchscreen Calibration

    - by Lee
    I have a machine with Ubuntu 12.04 installed and dual monitors, connected via VGA and DVI. One monitor is a touch screen and the other is a regular LCD monitor. The touch screen is an unknown brand made in China, and I am using the eGalax driver. The touch screen is now detected and works, but I need to do some calibration since touches do not click in the right place. The problem is that when I’m using xinput_calibrator, it shows 4 crosses to be clicked on; because I’m using dual monitors, two of the crosses show up on the touch screen (touchable) and the others on the other monitor, which is a regular non-touch monitor. Please help, thank you.
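
    One workaround is to temporarily turn off the non-touch output so that all four crosses land on the touch screen, calibrate, then turn it back on. The output names below are assumptions, so check them with xrandr first:

        xrandr                               # list outputs; suppose the regular LCD is DVI-0 (hypothetical)
        xrandr --output DVI-0 --off          # leave only the touch screen active
        xinput_calibrator                    # all four crosses now appear on the touch screen
        xrandr --output DVI-0 --auto         # re-enable the second monitor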

    Read the article

  • iptables MAC address filtering does not work

    - by Tony Lee
    I block every port by default with ufw and add an iptables rule like this: sudo iptables -A INPUT -p tcp --dport 1723 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT Then I list the iptables INPUT rules: sudo iptables -L INPUT --line-numbers Chain INPUT (policy DROP) num target prot opt source destination 1 ACCEPT udp -- anywhere anywhere udp dpt:domain 2 ACCEPT tcp -- anywhere anywhere tcp dpt:domain 3 ACCEPT udp -- anywhere anywhere udp dpt:bootps 4 ACCEPT tcp -- anywhere anywhere tcp dpt:bootps 5 ufw-before-logging-input all -- anywhere anywhere 6 ufw-before-input all -- anywhere anywhere 7 ufw-after-input all -- anywhere anywhere 8 ufw-after-logging-input all -- anywhere anywhere 9 ufw-reject-input all -- anywhere anywhere 10 ufw-track-input all -- anywhere anywhere 11 ACCEPT tcp -- anywhere anywhere tcp dpt:1723 MAC 00:11:22:33:44:55 But I can't reach my server on port 1723. Is there something wrong? I am using Ubuntu 11.10.
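
    Rule 11 sits after the ufw chains, and ufw-reject-input (rule 9) normally rejects the packet before rule 11 is ever reached. A sketch of inserting the rule ahead of the ufw chains instead of appending it (note that plain iptables rules like this are not persistent across reboots):

        sudo iptables -D INPUT -p tcp --dport 1723 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT   # remove the appended rule
        sudo iptables -I INPUT 5 -p tcp --dport 1723 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT # insert before the ufw-* chains
        sudo iptables -L INPUT --line-numbers                                                        # verify the new position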

    Read the article

  • PHP accessible shared content between two websites on the same VPS on different domains/IPs

    - by Lee Fentress
    I have two ecommerce websites, selling digital music downloads, on the same VPS, currently using cPanel/WHM (but thinking of switching to Virtualmin). They have separate domains and IPs of course. They both use the same set of music files, so I have duplicate copies in each website directory, which takes up a lot of disk space. How might I go about sharing the same set of music files across both sites, allowing PHP access, so that it does not break my shopping cart's functionality of serving customers the downloads after they have paid for them? I thought of maybe using symlinks or something, but I don't know if it's possible, or if it would have to somehow circumvent built-in security features of the server. I'm new to VPS management.
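
    One approach, sketched with assumed paths, is to keep a single copy outside both docroots and symlink it into each site (Apache needs Options FollowSymLinks on those directories, and any PHP open_basedir restriction must include the shared path; a bind mount works too if symlinks are blocked):

        sudo mkdir -p /srv/shared-music
        sudo mv /home/site1/public_html/music/* /srv/shared-music/      # hypothetical paths
        sudo rm -rf /home/site1/public_html/music /home/site2/public_html/music
        sudo ln -s /srv/shared-music /home/site1/public_html/music
        sudo ln -s /srv/shared-music /home/site2/public_html/music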

    Read the article

  • Has anyone used Sproutcore?

    - by Sam Lee
    Has anyone used Sproutcore for a web application? If so, can you give me a description of your experience? I am currently considering it, but I have a few concerns. First, the documentation is bad/incomplete, and I'm afraid that I'll spend lots of time figuring things out or digging through source code. Also, I'm a bit hesitant to use a project that is relatively new and could undergo significant changes. Any thoughts from people who have developed in Sproutcore are appreciated! EDIT/PS: Yes, I've seen this post: http://stackoverflow.com/questions/370598/sproutcore-and-cappuccino . However I'm interested in a bit lengthier description of Sproutcore itself from someone who's used it for a significant project.

    Read the article

  • What do you call the process of converting line breaks into html elements?

    - by Ben Lee
    On sites with user-created content (such as programmers SE) or blogging software back-ends, line breaks entered by the user in the content area are frequently converted into <br> and/or <p> tags when rendered on the front-end. For example, this: A limerick There once was a man from Nantucket Who kept all his cash in a bucket. Might render html like this: <p> A limerick </p> <p> There once was a man from Nantucket<br> Who kept all his cash in a bucket. </p> What is the standard name for this process of converting line breaks into html?

    Read the article

  • What is the best way to deal with 404s that all try to point to the same page and come from an external site?

    - by Lee
    I started getting 404s showing up in Google Webmaster Tools from a site linking to a specific category but with odd characters at the end of the URL. So something like this: http://example.com/category/puppies%EF%BC%9A.textwidget%E8%A6%81%E7%B4%A0%E7%B7%A8%E9%9B%86 Google Webmaster Tools says that there are about 120 of these links, and I can imagine there will be more to come. What is the best way to handle these links from an SEO point of view? I have heard that 301 redirecting too many links at one time can cause Google to ding the site, but I don't want this site to continue serving broken links. Any help on this would be appreciated.
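
    If the site runs Apache with mod_rewrite (an assumption), one option is to 301 the garbled variants back to the clean category URL so any link value is kept; a sketch that appends the rule to the site's .htaccess (hypothetical docroot, writable by the current user):

        # tighten the pattern if real sub-URLs exist under /category/puppies/
        printf '%s\n' \
          'RewriteEngine On' \
          '# send /category/puppies<garbage> back to the canonical category page' \
          'RewriteRule ^category/puppies.+$ /category/puppies/ [R=301,L]' \
          >> /home/site/public_html/.htaccess

    Answering those URLs with 410 Gone instead is the other common choice when the links should simply drop out of the index.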

    Read the article

  • About CDN architecture and routing

    - by Tony Lee
    Our web system uses a third-party CDN service. Assume a user sets their local DNS to Google DNS or OpenDNS to visit our web sites; the CDN service will then select what it considers the closest CDN proxy node. Fine, but the user's actual location might be somewhere else entirely, so the CDN service may choose the node furthest away from the user, and static resources load more slowly. At present, my idea is this: if the user has set their local DNS to Google DNS, we first get the user's actual IP address, run a traceroute to test for the best route, set a cookie in the user's browser, and then return a 302 response that jumps to the best CDN node. Can a traceroute tool on the user's browser side really provide that best-route decision? We ask because we find that once the user sets their local DNS server to a foreign network segment, for example 8.8.8.8, the CDN routing will choose a foreign service node.
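
    To confirm that the CDN is choosing nodes based on the DNS resolver's location rather than the visitor's, it helps to compare answers from different resolvers; a sketch with an assumed CDN hostname:

        dig +short static.example.com @8.8.8.8      # answer as seen through Google DNS
        dig +short static.example.com               # answer through the local/ISP resolver
        traceroute $(dig +short static.example.com @8.8.8.8 | tail -n1)   # rough path to the node Google DNS users are sent to

    Doing the correction with browser-side traceroutes and 302s is fragile; resolver-location mismatches are normally addressed on the DNS side (for example, CDNs that honour the EDNS client-subnet extension), so it is worth asking the CDN provider about that first.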

    Read the article

  • Why is my Workspace switcher view skewed

    - by Lee
    I have been using the Workspace Switcher in Ubuntu just fine but recently encountered this problem: the windows in the switcher don't fill the screen. I must have pressed some combination of buttons somehow, but I can't find any information anywhere with regard to resizing them. As you can see in the screenshot, it looks like a perspective view or something. http://i1115.photobucket.com/albums/k553/lmt337/Screenshotfrom2012-07-04103519.png I should also add that I have a dual monitor setup and Nvidia graphics. The switcher still works, but the fact that the screens don't fit my actual screens is driving me nuts. Thanks in advance for any help.
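
    One likely culprit (an assumption, but it matches the perspective look) is the Expo plugin's Deformation setting in Compiz, which draws the workspaces tilted or curved instead of flat. A sketch of checking it, assuming Unity/Compiz is in use:

        sudo apt-get install compizconfig-settings-manager
        ccsm    # open CompizConfig Settings Manager, find the Expo plugin and set Deformation to "None"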

    Read the article

  • WCF Service error on IIS with metadata

    - by Bruno Silva
    Hi, I'm trying to publish a service to IIS. It builds and runs OK on the ASP.NET dev server. When running in IIS I can get to the metadata by navigating to the service or by adding a service reference in Visual Studio. But when I call a method from my client app it crashes with an internal server error. So I went to the Event Log and found this:
    WebHost failed to process a request. Sender Information: System.ServiceModel.Activation.HostedHttpRequestAsyncResult/8810861 Exception: System.Web.HttpException (0x80004005): There was no channel actively listening at 'http://mysite.net/soundhubservice.svc/$metadata'. This is often caused by an incorrect address URI. Ensure that the address to which the message is sent matches an address on which a service is listening. ---> System.ServiceModel.EndpointNotFoundException: There was no channel actively listening at 'http://mysite.net/soundhubservice.svc/$metadata'. This is often caused by an incorrect address URI. Ensure that the address to which the message is sent matches an address on which a service is listening. at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) Process Name: w3wp Process ID: 1080
    My Web.Config looks something like this:
    <configuration>
      <system.web>
        <compilation debug="true" targetFramework="4.0" />
      </system.web>
      <system.serviceModel>
        <services>
          <service name="SoundHub.Services.SoundHubService" behaviorConfiguration="StreamingServiceBehavior">
            <host>
              <baseAddresses>
                <add baseAddress="http://localhost/SoundHubServive"/>
              </baseAddresses>
            </host>
            <endpoint address="service" binding="basicHttpBinding" bindingConfiguration="httpBuffering" contract="SoundHub.Services.ISoundHubService"/>
            <endpoint address="stream" binding="basicHttpBinding" bindingConfiguration="HttpStreaming" contract="SoundHub.Services.ISoundHubStreamService"/>
            <!--<endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />-->
          </service>
        </services>
        <bindings>
          <basicHttpBinding>
            <binding name="HttpStreaming" maxReceivedMessageSize="67108864" transferMode="Streamed"/>
            <binding name="httpBuffering" transferMode="Buffered" />
          </basicHttpBinding>
        </bindings>
        <behaviors>
          <serviceBehaviors>
            <behavior name="StreamingServiceBehavior">
              <serviceMetadata httpGetEnabled="True"/>
              <serviceDebug includeExceptionDetailInFaults="False"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>
      </system.serviceModel>
      <system.webServer>
        <modules runAllManagedModulesForAllRequests="true"/>
      </system.webServer>
    </configuration>
    I tried several combinations of settings I found while searching online but nothing helped; it's always the same error. Thanks, Bruno
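
    The '/$metadata' suffix in the log is what a WCF Data Services (OData) client asks for, which hints that the client proxy may be calling the wrong style of endpoint rather than the basicHttpBinding ones defined above. Some quick checks from the command line (host name taken from the log; the expected responses are assumptions based on the config shown):

        curl -i http://mysite.net/soundhubservice.svc            # should return the WCF help/metadata page (httpGetEnabled=True)
        curl -i "http://mysite.net/soundhubservice.svc?wsdl"     # SOAP metadata lives here, not under /$metadata
        curl -i http://mysite.net/soundhubservice.svc/service    # the basicHttpBinding endpoint; a plain GET normally returns HTTP 400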

    Read the article

  • ArchBeat Link-o-Rama for December 4, 2012

    - by Bob Rhubart
    Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups | The Old Toxophilist "Although in many cases we, as Cloud Users, may not be too worried how the Virtualisation Algorithm decides where to place our vServers," says The Old Toxophilist, "there are cases where it is extremely important that vServers run on distinct physical compute nodes." There's plenty more on the subject in his blog post. Oracle Endeca (2.3) Record Level Security | Adam Seed Adam Sneed's blog post covers "the basics of security within Endeca Information Discovery, as these basic security objects are required in order to explain the implementation of record level security." ODI Handling DQ | Gurcan Orhan Oracle ACE Director Gurcan Orhan suggests you have fun with these scripts for Oracle Data Integrator. Parleys Testimonial at GlassFish Community Event - JavaOne 2012 Video of Parley's webmaster Stephan Janssen's presentation at the GlassFish Community Event at JavaOne 2012, in which he explains why Parley's moved from Tomcat to GlassFish. Java Spotlight Episode 109: Pete Muir on CDI 1.1 This edition of Roger Brinkley's Java Spotlight Podcast features an interview with CDI 1.1 spec lead Pete Muir of JBoss/Red Hat. Muir talks about the features in CDI 1.1 and what to expect in the future. Webcast: Java Management Extensions with Oracle WebLogic Server 12c Dr. Frank Munz and Dave Cabelus do the talking in this on-demand webcast focused on Oracle WebLogic Server 12c with Java Management Extensions (JMX). Using the Coherence API to get Portable Object Format bytes | Bruno Borges Bruno Borges shares a code snippet that illustrates how easy it is to use the Coherence API. Thought for the Day "Experience is something you don't get until just after you need it." — Anonymous Source: SoftwareQuotes.com

    Read the article

  • Apache is responding with a blank white page

    - by Bruno Araujo
    I have the following situation: a site hosted on Apache 2.4, with SSL, that has worked like a charm for a while now, but out of nowhere, without modifications to the site, Apache started serving random blank pages. The workaround is to delete the browser's cookies or restart the browser. I've switched the virtual host to debug-level logging but it didn't get me anywhere. Here is the debug log of a failed page load: [Wed Oct 24 10:57:35.762547 2012] [ssl:info] [pid 27854:tid 140617706374912] [client 192.168.10.150:58917] AH01964: Connection to child 147 established (server xxx.com.br:443) [Wed Oct 24 10:57:35.762739 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1966): [client 192.168.10.150:58917] AH02043: SSL virtual host for servername xxx.com.br found [Wed Oct 24 10:57:35.777479 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1899): [client 192.168.10.150:58917] AH02041: Protocol: TLSv1, Cipher: DHE-RSA-AES256-SHA (256/256 bits) [Wed Oct 24 10:57:35.779912 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(243): [client 192.168.10.150:58917] AH02034: Initial (No.1) HTTPS request received for child 147 (server xxx.com.br:443) [Wed Oct 24 10:57:35.780044 2012] [authz_core:debug] [pid 27854:tid 140617706374912] mod_authz_core.c(809): [client 192.168.10.150:58917] AH01628: authorization result: granted (no directives) [Wed Oct 24 10:57:40.783950 2012] [ssl:info] [pid 27854:tid 140617706374912] (70007)The timeout specified has expired: [client 192.168.10.150:58917] AH01991: SSL input filter read failed. [Wed Oct 24 10:57:40.784077 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_io.c(988): [remote 192.168.10.150:58917] AH02001: Connection closed to child 147 with standard shutdown (server xxx.com.br:443)
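
    One way to take the browser out of the equation is to replay the request with and without the cookies the browser is sending, and see whether the blank response follows the cookie header (placeholder values below):

        curl -vk https://xxx.com.br/ -o /dev/null                              # clean request, no cookies
        curl -vk https://xxx.com.br/ -o /dev/null -H 'Cookie: PHPSESSID=...'   # replay the browser's cookie header (placeholder value)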

    Read the article

  • Archiving to Tape

    - by Bruno
    This is not about backups, this is about archiving. For argument's sake, let's say I have a 2 TB 7z file that I would like to archive to tape. I have 4 LTO-5 tapes (1.5 TB each). This may be a stupid question, but what setup would I need to let me drag and drop those files directly onto the tapes and have the file split automatically across 2 tapes, like so: Tape 1: Copy 1 (1.5 TB); Tape 2: Copy 1 (0.5 TB); Tape 3: Copy 2 (1.5 TB); Tape 4: Copy 2 (0.5 TB). I just want to be able to specify which files go on which tapes, as opposed to backups where the tapes just rotate. Thanks.
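
    GNU tar's multi-volume mode does this kind of splitting and prompts for the next tape when a volume fills; a sketch assuming the drive is /dev/nst0 and roughly 1.5 TB of native capacity per LTO-5 tape:

        # first copy: tar asks for the next tape when the length limit is hit (-L is in units of 1024 bytes)
        tar -c -M -L 1464843750 -f /dev/nst0 archive.7z
        # load the second pair of tapes and repeat for copy 2, then verify a copy with:
        tar -t -M -f /dev/nst0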

    Read the article

  • Disable SMS Syncing on Outlook

    - by Bruno Brant
    I've paired up my Samsung phone with an Outlook Exchange inbox (probably Outlook 2010), and now I've got the outstanding SMS syncing feature. Only, of course, it sucks, since my inbox gets flooded with SMSes that I already have on my phone. After looking around the internet for quite a while for an option that would allow me to disable that kind of syncing, the only guide I found was designed for Windows Phone 6.x. I desperately want to disable it. Does anyone have any clue how? I can't really believe that MS has forgotten to include the option. I've already looked at this question (Filter rule for SMS / text messages in exchange active sync (SMS sync)), and while it might help me, that's not what I'm looking for.

    Read the article

  • Publicly registering a mail server and web server with a free DNS server

    - by Bruno Vieira
    I'm trying to host our company's e-mail and website on our private server. I've already followed the Gentoo Virtual Mailhosting System with Postfix guide and my mail server is working (it delivers mail to local users; mail to external users goes to spam), and I know how to set up an Apache 2 server. What I don't know (and I mean really don't) is how to make them public. I did some research and found that I should ask my ISP to change the reverse DNS to my company domain in order to prevent my mail from being marked as spam; they are doing that. I already know I have to configure a DNS server, and it seems my registrar already provides one, but I don't know how to configure the CNAME, A, MX, TXT and all those other entries (is 'records' the right name for them?), or whether I must do some other configuration on my server. My server: Linux mail 3.2.21-gentoo #1 SMP My /etc/hosts: 127.0.0.1 mail.example.com.br example example.com.br ::1 mail.example.com.br mail example.com.br My /etc/conf.d/hostname: hostname="mail" What am I missing? If there's a guide on how to configure this I would be really grateful. Thanks in advance for the help. Cheers
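
    Once the zone is set up at the registrar's DNS service, the records can be checked from anywhere with dig; a sketch using the example domain above (the expected values are illustrative assumptions):

        dig +short A mail.example.com.br     # should return the server's public IP
        dig +short MX example.com.br         # should list the mail host with a priority, e.g. "10 mail.example.com.br."
        dig +short TXT example.com.br        # an SPF record such as "v=spf1 a mx ~all" helps keep outgoing mail out of spam
        dig +short -x 203.0.113.10           # the reverse DNS the ISP set up (placeholder IP) should point back to the mail host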

    Read the article

  • Ubuntu and Windows 8 shared partition gets corrupted

    - by Bruno-P
    I have a dual boot (Ubuntu 12.04 and Windows 8) system. Both systems have access to an NTFS "DATA" partition which contains all my images, documents, music and some application data like Chrome and Thunderbird Profiles which used by both OS. Everything was working fine in my Dual boot Ubuntu/Windows 7, but after updating to Windows 8 I am having a lot of troubles. First, sometimes, I add some files from Ubuntu into my DATA partition but they don't show up in Windows. Sometimes, I can't even use the DATA partition from Windows. When I try to save a file it gives an error "The directory or file is corrupted or unreadable". I need to run checkdisk to fix it but after some time, same error appears. Before upgrading to Windows 8 I also installed a new hard drive and copied the old data using clonezilla (full disk clone). Here is the log of my last chkdisk: Chkdsk was executed in read/write mode. Checking file system on D: Volume dismounted. All opened handles to this volume are now invalid. Volume label is DATA. CHKDSK is verifying files (stage 1 of 3)... Deleted corrupt attribute list entry with type code 128 in file 67963. Unable to find child frs 0x12a3f with sequence number 0x15. The attribute of type 0x80 and instance tag 0x2 in file 0x1097b has allocated length of 0x560000 instead of 0x427000. Deleted corrupt attribute list entry with type code 128 in file 67963. Unable to locate attribute with instance tag 0x2 and segment reference 0x1e00000001097b. The expected attribute type is 0x80. Deleting corrupt attribute record (128, "") from file record segment 67963. Attribute record of type 0x80 and instance tag 0x3 is cross linked starting at 0x2431b2 for possibly 0x20 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x3 in file 0x1791e is already in use. Deleting corrupt attribute record (128, "") from file record segment 96542. Attribute record of type 0x80 and instance tag 0x4 is cross linked starting at 0x6bc7 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x4 in file 0x17e83 is already in use. Deleting corrupt attribute record (128, "") from file record segment 97923. Attribute record of type 0x80 and instance tag 0x4 is cross linked starting at 0x1f7cec for possibly 0x5 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x4 in file 0x17eaf is already in use. Deleting corrupt attribute record (128, "") from file record segment 97967. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x441bd7f for possibly 0x9 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x32085 is already in use. Deleting corrupt attribute record (128, "") from file record segment 204933. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4457850 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x320be is already in use. Deleting corrupt attribute record (128, "") from file record segment 204990. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4859249 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3726b is already in use. Deleting corrupt attribute record (128, "") from file record segment 225899. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x485d309 for possibly 0x1 clusters. 
Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3726c is already in use. Deleting corrupt attribute record (128, "") from file record segment 225900. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48a47de for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37286 is already in use. Deleting corrupt attribute record (128, "") from file record segment 225926. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48ac80b for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37287 is already in use. Deleting corrupt attribute record (128, "") from file record segment 225927. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48ae7ef for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37288 is already in use. Deleting corrupt attribute record (128, "") from file record segment 225928. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48af7f8 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3728a is already in use. Deleting corrupt attribute record (128, "") from file record segment 225930. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48c39b6 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37292 is already in use. Deleting corrupt attribute record (128, "") from file record segment 225938. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x495d37a for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x372d7 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226007. Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d0bd38 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x372dc is already in use. Deleting corrupt attribute record (160, $I30) from file record segment 226012. Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4c2d9bc for possibly 0x1 clusters. Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x372ed is already in use. Deleting corrupt attribute record (160, $I30) from file record segment 226029. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4a4c1c3 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37354 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226132. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4a8e639 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37376 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226166. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4a8f6eb for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37379 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226169. 
Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ae1aa8 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37391 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226193. Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4b00d45 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x37396 is already in use. Deleting corrupt attribute record (160, $I30) from file record segment 226198. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4b02d50 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3739c is already in use. Deleting corrupt attribute record (128, "") from file record segment 226204. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4b3407a for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373a8 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226216. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4bd8a1b for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373db is already in use. Deleting corrupt attribute record (128, "") from file record segment 226267. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4bd9a28 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373dd is already in use. Deleting corrupt attribute record (128, "") from file record segment 226269. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4c2fb24 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373f3 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226291. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cb67e9 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37424 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226340. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cba829 for possibly 0x2 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37425 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226341. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cbe868 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37427 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226343. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cbf878 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37428 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226344. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cc58d8 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742a is already in use. 
Deleting corrupt attribute record (128, "") from file record segment 226346. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ccc943 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742b is already in use. Deleting corrupt attribute record (128, "") from file record segment 226347. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd199b for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742d is already in use. Deleting corrupt attribute record (128, "") from file record segment 226349. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd29a8 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742f is already in use. Deleting corrupt attribute record (128, "") from file record segment 226351. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd39b8 for possibly 0x2 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37430 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226352. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd49c8 for possibly 0x2 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37432 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226354. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd9a16 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37435 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226357. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cdca46 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37436 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226358. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ce0a78 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37437 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226359. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ce6ad9 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743a is already in use. Deleting corrupt attribute record (128, "") from file record segment 226362. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cebb28 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743b is already in use. Deleting corrupt attribute record (128, "") from file record segment 226363. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ceeb67 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743d is already in use. Deleting corrupt attribute record (128, "") from file record segment 226365. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cf4bc6 for possibly 0x1 clusters. 
Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743e is already in use. Deleting corrupt attribute record (128, "") from file record segment 226366. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cfbc3a for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37440 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226368. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cfcc48 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37442 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226370. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d02ca9 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37443 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226371. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d06ce8 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37444 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226372. Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d9a608 for possibly 0x2 clusters. Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x37449 is already in use. Deleting corrupt attribute record (160, $I30) from file record segment 226377. Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d844ab for possibly 0x1 clusters. Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x3744b is already in use. Deleting corrupt attribute record (160, $I30) from file record segment 226379. Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d6c32b for possibly 0x1 clusters. Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x3744c is already in use. Deleting corrupt attribute record (160, $I30) from file record segment 226380. Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d2af25 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x3744e is already in use. Deleting corrupt attribute record (160, $I30) from file record segment 226382. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d0fd78 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37451 is already in use. Deleting corrupt attribute record (128, "") from file record segment 226385. Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d16ef8 for possibly 0x1 clusters. Some clusters occupied by attribute of type 0x8 Can anyone help? Thank you
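
    A frequent culprit for exactly this pattern is Windows 8's Fast Startup (hybrid shutdown), which leaves the NTFS volume in a hibernated, not-cleanly-unmounted state that Ubuntu then writes into; disabling Fast Startup in Windows is the usual cure. From the Ubuntu side, a sketch for checking the volume before writing to it (the device name is an assumption):

        sudo ntfsfix --no-action /dev/sdb1                  # hypothetical device; reports problems without changing anything
        sudo mount -t ntfs-3g -o ro /dev/sdb1 /mnt/data     # mount read-only until Windows has shut down fully and chkdsk is clean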

    Read the article

  • RabbitMQ message consumers stop consuming messages

    - by Bruno Thomas
    Hi Server Fault, Our team is in a spike sprint to choose between ActiveMQ and RabbitMQ. We made 2 little producer/consumer spikes sending an object message with an array of 16 strings, a timestamp, and 2 integers. The spikes are OK on our dev machines (messages are consumed properly). Then came the benchmarks. We first noticed that sometimes, on our machines, when we were sending a lot of messages, the consumer was hanging. It was there, but the messages were accumulating in the queue. Then we went to the bench platform: a cluster of 2 RabbitMQ machines (4 cores/3.2 GHz, 4 GB RAM), load balanced by a VIP; one to 6 consumers running on the RabbitMQ machines, saving the messages in a MySQL DB (same type of machine for the DB); 12 producers running on 12 AS machines (Tomcat), attacked with JMeter running on another machine. The load is about 600 to 700 HTTP requests per second on the servlets, which produce the same load of RabbitMQ messages. We noticed that sometimes consumers hang (well, they are not blocked, but they don't consume messages anymore). We can see that because each consumer saves around 100 msg/sec in the database, so when one stops consuming, the overall rate of messages saved per second in the DB falls by the same ratio (if, let's say, 3 consumers stop, we fall from around 600 msg/sec to 300 msg/sec). During that time, the producers are OK and still produce at the JMeter rate (around 600 msg/sec). The messages are in the queues and taken by the consumers that are still "alive". We load all the servlets with the producers first, then launch all the consumers one by one, checking that the connections are OK, then run JMeter. We are sending messages to one direct exchange. All consumers are listening to one persistent queue bound to the exchange. That point is major for our choice. Have you seen this with RabbitMQ? Do you have an idea of what is going on? Thank you for your answers.
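
    A first thing worth checking when consumers go quiet is whether messages are piling up unacknowledged (consumers that never ack, combined with a basic.qos prefetch limit, stop receiving once the limit is reached) and whether the broker has put connections into flow control; a diagnostic sketch on one of the RabbitMQ nodes:

        sudo rabbitmqctl list_queues name messages_ready messages_unacknowledged   # a growing unacked count points at missing acks
        sudo rabbitmqctl list_connections name state                               # connections stuck in 'flow' or 'blocked' point at flow control / memory alarms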

    Read the article

  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details in the end. I didn't manage to fix it, and I will file a bug. EDIT 2: I managed to fix it, it apparently works. I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installtion CD (then updated it) on each of these machines. The cloud seems to work fine, I have lots of virtual machines running Natty servers on them. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it (uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64) exactly as I did with Natty, then tried to launch an instance (euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX) using Oneiric's image, but the instance is not able to boot. With euca-get-console-output I get the following: [ 0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0) [ 0.462388] Please append a correct "root=" boot option; here are the available partitions: [ 0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu [ 0.466526] Call Trace: [ 0.466989] [<ffffffff815d3ee5>] panic+0x91/0x194 [ 0.467860] [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e [ 0.468891] [<ffffffff81ad126a>] mount_root+0x54/0x59 [ 0.469829] [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7 [ 0.470883] [<ffffffff81ad0d76>] kernel_init+0x140/0x145 [ 0.471837] [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10 [ 0.472889] [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df [ 0.473884] [<ffffffff815f38e0>] ? gs_change+0x13/0x13 The filesystem is labeled "cloudimg-rootfs", inside the image both /etc/fstab and /boot/grub/grub.cfg always refer to the image by the label, everything seems to be correct, yet the kernel says it can't find the root file system. I've spent many hours googling, but nothing came out. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. The first problem is that the Kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages, replaced it with the -virtual ones. Then I extracted the new kernel (and replaced the original one (-generic) that came with the tarball) because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then, the console started showing this: mount: mount point ext4 does not exist If you check the /etc/fstab file in the image, it says: LABEL=cloudimg-rootfs ext4 defaults 0 1 Damnt, where's my mount point? Note that it is missing /proc as well. Well, when you think it is over, you will notice that your instance will have no network connectivity. Let's check /etc/network/interface: # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback Oh my! It is missing eth0... here I stopped. I can't take no more. I give up. Looks like Canonical has just forgotten to properly set up this image. 
At first, I thought: "have I downloaded a server image by mistake?", but no, I double checked. It is really the cloud image; it even has "cloud-init" installed (which is not, by default, on server images). They just forgot to prepare it. I will file a bug (and reference it here once this is done), and hope they fix it soon! EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea if the image is now good to go...
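
    For reference, the fstab and eth0 fixes described above can be applied to the image before publishing it; a sketch assuming the extracted .img file is named as below and gets loop-mounted at /mnt/img:

        sudo mkdir -p /mnt/img
        sudo mount -o loop oneiric-server-cloudimg-amd64.img /mnt/img
        echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 1' | sudo tee /mnt/img/etc/fstab
        printf 'auto lo\niface lo inet loopback\n\nauto eth0\niface eth0 inet dhcp\n' | sudo tee /mnt/img/etc/network/interfaces
        sudo umount /mnt/img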

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >