Search Results

Search found 6103 results on 245 pages for 'logical tree'.


  • How to return function in document ready?

    - by FatDogMark
    I don't know if it is possible to do this. I have 2 JS files. The first JS file: var news_pos, about_pos, services_pos, clients_pos; function define_pos(){ var $top_slides=$('#top_slides'), $mid_nav=$('#mid_nav'), $news_section=$('#section-1'); var fixed_height = $top_slides.height()+$mid_nav.height(); news_pos = fixed_height-20 ; about_pos = fixed_height+$news_section.height(); services_pos = fixed_height+$news_section.height()*2; clients_pos = fixed_height+$news_section.height()*3; } $(document).ready(function(){ var section_news = $('#section-1'), section_about = $('#section-2'), section_services = $('#section-3'), section_clients = $('#section-4'); setheight(); function setheight(){ var section_height=$(window).height()+200; $section_news.height(section_height); $section_about.height(section_height); $section_services.height(section_height); $section_clients.height(section_height); define_pos(); } }); The second JS file: $(document).ready(function(){ var nav = { '$btn1':$('#btn1'), '$btn2':$('#btn2'), '$btn3':$('#btn3'), '$btn4':$('#btn4'), '$btn5':$('#btn5'), myclick : function(){ myclicked(nav.$btn1,0); myclicked(nav.$btn2,news_pos); myclicked(nav.$btn3,about_pos); myclicked(nav.$btn4,services_pos); myclicked(nav.$btn5,clients_pos); function myclicked(j,k){ j.click(function(e) { e.preventDefault(); $('html,body').animate({scrollTop: k}, 1000); }); } //Is it right to do return{'myclick':myclick}? How would I call it? Seems not logical. } } nav.myclick(); // This will not work here because it says news_pos is undefined //if I use setTimeout(nav.myclick,1000) it works, but I want to run it right when the positions are calculated. }); How do I pass the nav.myclick() function to the first JS file and call it in setheight(), after define_pos()? By the way, writing code directly in Stack Overflow is strange; pressing Tab doesn't really give you any spacing.
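    A minimal sketch of one way to wire this up, assuming both files are loaded as plain scripts on the same page (the window.initNav name is a placeholder invented here, not something from the question):

        // Second file: define the wiring as a plain function at top level
        // instead of running it inside $(document).ready().
        window.initNav = function () {
            var nav = { /* ...the buttons and myclick exactly as in the question... */ };
            nav.myclick();
        };

        // First file, at the end of setheight(), once the positions exist:
        define_pos();
        if (window.initNav) { window.initNav(); }   // news_pos etc. are valid here

    Because window.initNav is assigned at script-parse time, it already exists by the time the first file's ready handler runs, so no setTimeout guesswork is needed.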

    Read the article

  • Please explain this delete top 100 SQL syntax

    - by Patrick
    Basically I want to do this: delete top( 100 ) from table order by id asc but MS SQL doesn't allow ORDER BY in this position. The common solution seems to be this: DELETE table WHERE id IN(SELECT TOP (100) id FROM table ORDER BY id asc) But I also found this method here: delete table from (select top (100) * from table order by id asc) table which has a much better estimated execution plan (74:26). Unfortunately I don't really understand the syntax; can someone please explain it to me? I'm always interested in any other methods to achieve the same result as well. EDIT: I'm still not getting it, I'm afraid. I want to be able to read the query as I read the first two, which are practically English. The above queries to me are: delete the top 100 records from table, with the records ordered by id ascending; delete the top 100 records from table where id is any one of (this lot of ids); delete table from (this lot of records) table. I can't turn the third one into a logical English sentence... I guess what I'm trying to get at is how this turns into "delete from table (this lot of records)". The FROM seems to be in an illogical position, and the second mention of 'table' is logically superfluous (to me).
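    The derived-table form reads more naturally once the two occurrences of "table" are given different names. A sketch with an explicit alias (the alias t and the name dbo.MyTable are placeholders, not from the question):

        -- In T-SQL a derived table over a single base table is updatable,
        -- so deleting "through" it deletes the underlying rows it selected.
        DELETE t                             -- t = the row set to delete
        FROM (SELECT TOP (100) *
              FROM dbo.MyTable              -- the real table being deleted from
              ORDER BY id ASC) AS t;        -- t is just those first 100 rows

    Read it as: "of the rows the subquery picked (the first 100 by id), delete each one from its base table." The question's version simply aliases the derived table with the same name as the base table, which is why the second "table" looks superfluous.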

    Read the article

  • What is the best practice when using UIStoryboards?

    - by Scott Sherwood
    Having used storyboards for a while now, I have found them extremely useful; however, they do have some limitations, or at least unnatural ways of doing things. While it seems like a single storyboard should be used for your app, even a moderately sized application presents several problems with that approach. Working within teams is made more difficult, as conflicts in storyboards can be problematic to resolve (any tips on this would also be welcome). The storyboard itself can also become quite cluttered and unmanageable. So my question is: what are the best practices for their use? I have considered a hybrid approach, with logical tasks split into separate storyboards; however, this results in the UX flow being split between the code and the storyboards. To me this feels like the best way to create reusable flows such as login actions, etc. Also, should I still consider a place for XIBs? This article has quite a good overview of many of the issues, and it proposes that XIBs should be used for scenes that have only one screen. Again this feels unusual to me: given Apple's support for instantiating unconnected scenes from a storyboard, it would suggest that XIBs won't have a place in the future, but I could be wrong.

    Read the article

  • Should I Teach My Son Programming? What approaches should I take? [closed]

    - by DaveDev
    I was wondering if it's a good idea to teach object-oriented programming to my son. I was never really good at math as a kid, but I think that since I've started programming it has given me a greater ability to understand math, by being better able to visualise relationships between abstract models. I thought it might give him an advantage in learning and applying logical and mathematical concepts throughout his life if he was able to take advantage of the tools available to programmers. What would be the best programming fields, techniques and/or concepts? What approach should I take? What concepts should I avoid? In what fields of mathematics would he find this benefits him most? He's only 2 now, so it would be another few years before I do this (and even then, only from a very high-level point of view). I thought I'd put it to the programming community and see what you guys thought. Possible Duplicate: What are some recommended programming resources for pre-teens?

    Read the article

  • Are Interfaces "Object"?

    - by PrashantGupta
    package inheritance; class A{ public String display(){ return "This is A!"; } } interface Workable{ public String work(); } class B extends A implements Workable{ public String work(){ return "B is working!"; } } public class TestInterfaceObject{ public static void main(String... args){ B obj=new B(); Workable w=obj; //System.out.println(w.work()); //invoking work method on Workable type reference System.out.println(w.display()); //invoking display method on Workable type reference //System.out.println(w.hashCode()); // invoking Object's hashCode method on Workable type reference } } As we know, the methods that can be invoked depend on the type of the reference variable on which we invoke them. Here, in the code, the work() method is invoked on the "w" reference (which is of type Workable), so that invocation compiles successfully. Then the display() method is invoked on "w", which yields a compilation error saying the display method was not found; quite obvious, as Workable doesn't know about it. But then we try to invoke the Object class's method, i.e. hashCode(), and that compiles and executes successfully. How is that possible? Any logical explanation?
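    The behaviour described here follows from the Java Language Specification (§9.2): an interface with no direct superinterfaces implicitly declares a public abstract member method corresponding to each public instance method of java.lang.Object. A short sketch reusing the question's classes (the Demo class name is invented):

        class Demo { // assumes A, Workable and B from the question are on the classpath
            public static void main(String[] args) {
                Workable w = new B();
                System.out.println(w.work());          // OK: declared in Workable
                System.out.println(w.hashCode());      // OK: implicitly declared in Workable (JLS 9.2)
                // System.out.println(w.display());    // compile error: not a member of Workable
                System.out.println(((A) w).display()); // OK once cast to a type that declares display()
            }
        }

    So Workable does "know about" hashCode(), equals(), toString() and the rest even though it never mentions them; display() is not an Object method, so it needs a cast.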

    Read the article

  • What are advantages of using a one-to-one table relationship? (MySQL)

    - by byronh
    What are the advantages of using a one-to-one table relationship as opposed to simply storing all the data in one table? I understand and make use of one-to-many, many-to-one, and many-to-many all the time, but implementing a one-to-one relationship seems like a tedious and unnecessary task, especially if you use naming conventions for relating (PHP) objects to database tables. I couldn't find anything on the net or on this site that supplies a good real-world example of a one-to-one relationship. At first I thought it might be logical to separate 'users', for example, into two tables, one containing public information like an 'about me' for profile pages and one containing private information such as login/password, etc. But why go through all the trouble of unnecessary JOINs when you can just choose which fields to select from that table anyway? If I'm displaying the user's profile page, obviously I would only SELECT id,username,email,aboutme etc. and not the fields containing their private info. Anyone care to enlighten me with some real-world examples of one-to-one relationships?
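    One real-world pattern is splitting off a security-sensitive or optional subset into its own 1:1 table, which keeps the frequently read rows narrow and lets optional records exist without NULL-filled columns. A minimal sketch (table and column names invented for illustration):

        -- Public profile data stays in a narrow, hot table; credentials live
        -- in a one-to-one table whose primary key is also the foreign key.
        CREATE TABLE users (
            id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(50)  NOT NULL,
            email    VARCHAR(255) NOT NULL,
            aboutme  TEXT
        ) ENGINE=InnoDB;

        CREATE TABLE user_credentials (
            user_id       INT UNSIGNED PRIMARY KEY,   -- PK = FK enforces the 1:1
            password_hash CHAR(60) NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users(id)
        ) ENGINE=InnoDB;

    Beyond "which fields do I SELECT", the split means the credentials table can be granted, backed up, and audited separately, and a user row can exist with no credentials row at all (a 1:0..1 relationship), which a single wide table can only express with NULLs.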

    Read the article

  • Build-Essentials installation failing

    - by Brickman
    I am having trouble accessing several critical header files that should be part of the build process. The "Ubuntu Software Center" shows "Build Essentials" as installed. Next I ran the following two commands, which did not improve the situation: ~$ sudo apt-get install build-essential [sudo] password for: Reading package lists... Done Building dependency tree Reading state information... Done build-essential is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. :~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. :~$ Here is a dump of the headers after the installation attempts: > /usr/include/boost/interprocess/detail/atomic.hpp > /usr/include/boost/interprocess/smart_ptr/detail/sp_counted_base_atomic.hpp > /usr/include/qt4/Qt/qatomic.h /usr/include/qt4/Qt/qbasicatomic.h > /usr/include/qt4/QtCore/qatomic.h > /usr/include/qt4/QtCore/qbasicatomic.h > /usr/share/doc/git-annex/html/bugs/git_annex_unlock_is_not_atomic.html > /usr/src/linux-headers-3.11.0-15/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-15/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-15-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/arc/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-17/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-17/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-17-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/parisc/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-18/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-18/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-18-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-19/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-19-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/arm64/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-20/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-20/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-20-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/s390/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-22/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-22/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-22-generic/include/linux/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.14.4-031404/include/linux/atomic.h > /usr/src/linux-headers-3.14.4-031404-generic/include/linux/atomic.h > /usr/src/linux-headers-3.14.4-031404-lowlatency/include/linux/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/alpha/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/arc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/arm/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/arm64/include/asm/atomic.h > 
/usr/src/linux-lts-saucy-3.11.0/arch/avr32/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/blackfin/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/cris/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/frv/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/h8300/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/hexagon/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/ia64/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/m32r/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/m68k/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/metag/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/microblaze/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/mips/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/mn10300/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/parisc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/powerpc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/s390/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/score/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/sh/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/sparc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/tile/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/x86/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/xtensa/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/bitops/atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/linux/atomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng/lib/ringbuffer/vatomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng/wrapper/ringbuffer/vatomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng-modules/lib/ringbuffer/vatomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng-modules/wrapper/ringbuffer/vatomic.h Yes, I know there are multiple headers of the same type here, but they are different versions. Version "linux-headers-3.14.4-031404" appears to be the latest. Ubuntu shows "Nothing needed to be installed." However, the following C/C++ header files appear to be missing for Eclipse and QT4: #include <linux/version.h> #include <linux/module.h> #include <linux/socket.h> #include <linux/miscdevice.h> #include <linux/list.h> #include <linux/vmalloc.h> #include <linux/slab.h> #include <linux/init.h> #include <asm/uaccess.h> #include <asm/atomic.h> #include <linux/delay.h> #include <linux/usb.h> This problem appears on my 32-bit version of Ubuntu and on both of my 64-bit versions. What am I doing wrong?
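    Worth noting: the missing includes (linux/module.h, asm/uaccess.h, etc.) are kernel-space headers, which live in the kernel build tree rather than /usr/include, so build-essential alone will never put them on a user-space include path. A sketch of how an out-of-tree kernel module normally picks them up, assuming the code in question really is a kernel module (mymod is a placeholder name):

        # kbuild-style Makefile: the kernel's own build system supplies the
        # include paths for linux/*.h and asm/*.h
        obj-m += mymod.o

        all:
        	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

        clean:
        	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

    For Eclipse or Qt Creator to resolve the same headers, their project include paths must point at the matching /usr/src/linux-headers-... directories; the IDEs do not pick them up from build-essential.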

    Read the article

  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems to be up and running, except that I cannot get past 403 errors on both HTML and PHP pages. I have used CHMOD and CHOWN, changed ownership in the config file, done everything possible, and have been stuck for 2 days. I'd appreciate any help; here's hoping it's a stupid error on my part. Here is the log file with debug options on: 2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408 GET /index.html HTTP/1.1 Host: 10.0.1.8 User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cache-Control: max-age=0 2011-02-21 11:23:13: (response.c.241) run condition 2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI 2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html 2011-02-21 11:23:13: (response.c.302) URI-scheme : http 2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8 2011-02-21 11:23:13: (response.c.304) URI-path : /index.html 2011-02-21 11:23:13: (response.c.305) URI-query : 2011-02-21 11:23:13: (response.c.349) -- sanatising URI 2011-02-21 11:23:13: (response.c.350) URI-path : /index.html 2011-02-21 11:23:13: (response.c.470) -- before doc_root 2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.473) Path : 2011-02-21 11:23:13: (response.c.521) -- after doc_root 2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.541) -- logical -> physical 2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.561) -- handling physical path 2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.608) -- access denied 2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.128) Response-Header: HTTP/1.1 403 Forbidden Content-Type: text/html Content-Length: 345 Date: Mon, 21 Feb 2011 16:23:13 GMT Server: lighttpd/1.4.28 Here is the directory listing. I used CHOWN to set it to lighttpd:lighttpd [root@localhost lighttpd]# ls -al total 40 drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 . drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 .. -rwxrwxrwx 1 lighttpd lighttpd 10 Feb 20 08:32 index.html -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:48 index.php -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:39 info.php [root@localhost lighttpd]# Requested Commands: [root@localhost lighttpd]# ls -ld / /srv /srv/www drwxr-xr-x 22 root root 4096 Feb 21 04:39 / drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 20 07:38 /srv drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www [root@localhost lighttpd]# ps auxZ | grep lighttpd root:system_r:httpd_t lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd
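    One detail in the quoted output worth a second look: ps auxZ shows lighttpd running confined as httpd_t, i.e. SELinux is active. Whether that is the cause here is an assumption, but it would produce exactly this pattern (permissions wide open, server still logging "access denied"). A quick way to confirm or rule it out with standard SELinux tooling:

        # inspect the SELinux labels on the docroot; files created outside
        # /var/www often lack an httpd-readable type
        ls -Zd /srv/www/lighttpd /srv/www/lighttpd/index.html

        # temporarily switch to permissive mode and retest the 403
        setenforce 0        # remember to setenforce 1 afterwards

        # if SELinux was the problem, label the docroot properly instead
        chcon -R -t httpd_sys_content_t /srv/www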

    Read the article

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the domU is not restarted on the second node. The cluster is set up with two nodes, each running OpenSuSE 11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes; one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows: keepalive 1 deadtime 10 warntime 5 udpport 694 ucast eth1 auto_failback off node dhcp-166 node stage use_logd yes crm yes My resources look as follows: crm(live)configure# show node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166 node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage primitive Xen-Util ocf:heartbeat:Xen \ meta target-role="Started" \ operations $id="Xen-Util-operations" \ op start interval="0" timeout="60" start-delay="0" \ op stop interval="0" timeout="120" \ params xmfile="/etc/xen/vm/xen-util" primitive my-stonith stonith:external/ipmi \ params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \ op monitor interval="2m" timeout="60s" primitive my-stonith2 stonith:external/ipmi \ params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \ op monitor interval="2m" timeout="60s" property $id="cib-bootstrap-options" \ dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \ cluster-infrastructure="Heartbeat" The Xen domU config file is as follows: name = "xen-util" bootloader = "/usr/lib/xen/boot/domUloader.py" #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen" bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen" memory = 4096 disk = [ 'phy:vg_xen/xen-util-root,xvda1,w', 'phy:vg_xen/xen-util-swap,xvda2,w', ] root = "/dev/xvda1" vif = [ 'mac=00:16:3e:42:42:06' ] #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ] extra = "" Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It seems to want to try, as an "xm list" will show it for a few seconds, and if you "xm console xen-util" it will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". Now, when node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running, and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine. No problems. So I know it works in that respect. Any ideas? Thanks!
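    One thing absent from the quoted CRM configuration is a no-quorum-policy. This is only a guess from what is shown, but in a two-node cluster the surviving node loses quorum when its peer dies, and with Pacemaker's default policy (stop) it will refuse to run resources, which would match the symptom. A sketch of the usual two-node settings (crm shell):

        # two-node clusters cannot retain quorum with one node down, so tell
        # Pacemaker to carry on without it (reasonable here only because
        # STONITH via IPMI is already configured)
        crm configure property no-quorum-policy="ignore"
        crm configure property stonith-enabled="true"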

    Read the article

  • OBJ model loaded in LWJGL has a black area with no texture

    - by gambiting
    I have a problem with loading an .obj file in LWJGL and its textures. The object is a tree (it's a paid model from TurboSquid, so I can't post it here, but here's the link if you want to see how it should look): http://www.turbosquid.com/FullPreview/Index.cfm/ID/701294 I wrote a custom OBJ loader using the LWJGL tutorial from their wiki. It looks like this: public class OBJLoader { public static Model loadModel(File f) throws FileNotFoundException, IOException { BufferedReader reader = new BufferedReader(new FileReader(f)); Model m = new Model(); String line; Texture currentTexture = null; while((line=reader.readLine()) != null) { if(line.startsWith("v ")) { float x = Float.valueOf(line.split(" ")[1]); float y = Float.valueOf(line.split(" ")[2]); float z = Float.valueOf(line.split(" ")[3]); m.verticies.add(new Vector3f(x,y,z)); }else if(line.startsWith("vn ")) { float x = Float.valueOf(line.split(" ")[1]); float y = Float.valueOf(line.split(" ")[2]); float z = Float.valueOf(line.split(" ")[3]); m.normals.add(new Vector3f(x,y,z)); }else if(line.startsWith("vt ")) { float x = Float.valueOf(line.split(" ")[1]); float y = Float.valueOf(line.split(" ")[2]); m.texVerticies.add(new Vector2f(x,y)); }else if(line.startsWith("f ")) { Vector3f vertexIndicies = new Vector3f(Float.valueOf(line.split(" ")[1].split("/")[0]), Float.valueOf(line.split(" ")[2].split("/")[0]), Float.valueOf(line.split(" ")[3].split("/")[0])); Vector3f textureIndicies = new Vector3f(Float.valueOf(line.split(" ")[1].split("/")[1]), Float.valueOf(line.split(" ")[2].split("/")[1]), Float.valueOf(line.split(" ")[3].split("/")[1])); Vector3f normalIndicies = new Vector3f(Float.valueOf(line.split(" ")[1].split("/")[2]), Float.valueOf(line.split(" ")[2].split("/")[2]), Float.valueOf(line.split(" ")[3].split("/")[2])); m.faces.add(new Face(vertexIndicies,textureIndicies,normalIndicies,currentTexture.getTextureID())); }else if(line.startsWith("g ")) { if(line.length()>2) { String name = line.split(" ")[1]; currentTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/" + name + ".png")); System.out.println(currentTexture.getTextureID()); } } } reader.close(); System.out.println(m.verticies.size() + " verticies"); System.out.println(m.normals.size() + " normals"); System.out.println(m.texVerticies.size() + " texture coordinates"); System.out.println(m.faces.size() + " faces"); return m; } } Then I create a display list for my model using this code: objectDisplayList = GL11.glGenLists(1); GL11.glNewList(objectDisplayList, GL11.GL_COMPILE); Model m = null; try { m = OBJLoader.loadModel(new File("res/untitled4.obj")); } catch (Exception e1) { e1.printStackTrace(); } int currentTexture=0; for(Face face: m.faces) { if(face.texture!=currentTexture) { currentTexture = face.texture; GL11.glBindTexture(GL11.GL_TEXTURE_2D, currentTexture); } GL11.glColor3f(1f, 1f, 1f); GL11.glBegin(GL11.GL_TRIANGLES); Vector3f n1 = m.normals.get((int) face.normal.x - 1); GL11.glNormal3f(n1.x, n1.y, n1.z); Vector2f t1 = m.texVerticies.get((int) face.textures.x -1); GL11.glTexCoord2f(t1.x, t1.y); Vector3f v1 = m.verticies.get((int) face.vertex.x - 1); GL11.glVertex3f(v1.x, v1.y, v1.z); Vector3f n2 = m.normals.get((int) face.normal.y - 1); GL11.glNormal3f(n2.x, n2.y, n2.z); Vector2f t2 = m.texVerticies.get((int) face.textures.y -1); GL11.glTexCoord2f(t2.x, t2.y); Vector3f v2 = m.verticies.get((int) face.vertex.y - 1); GL11.glVertex3f(v2.x, v2.y, v2.z); Vector3f n3 = m.normals.get((int) face.normal.z - 1); GL11.glNormal3f(n3.x, n3.y, n3.z); Vector2f t3 = m.texVerticies.get((int) face.textures.z -1); GL11.glTexCoord2f(t3.x, t3.y); Vector3f v3 = m.verticies.get((int) face.vertex.z - 1); GL11.glVertex3f(v3.x, v3.y, v3.z); GL11.glEnd(); } GL11.glEndList(); The currentTexture is an int - it contains the ID of the currently used texture. So my model looks absolutely fine without textures: (sorry, I cannot post hyperlinks since I am a new user) i.imgur.com/VtoK0.png But look what happens if I enable GL_TEXTURE_2D: i.imgur.com/z8Kli.png i.imgur.com/5e9nn.png i.imgur.com/FAHM9.png As you can see, an entire side of the tree appears to be missing - and it's not transparent, since it's not in the colour of the background - it's rendered black. It's not a problem with the model - if I load it using Kanji's OBJ loader it works fine (but the thing is that I need to write my own OBJ loader): i.imgur.com/YDATo.png This is my OpenGL init section: //init display try { Display.setDisplayMode(new DisplayMode(Support.SCREEN_WIDTH, Support.SCREEN_HEIGHT)); Display.create(); Display.setVSyncEnabled(true); } catch (LWJGLException e) { e.printStackTrace(); System.exit(0); } GL11.glLoadIdentity(); GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glClearColor(1.0f, 0.0f, 0.0f, 1.0f); GL11.glShadeModel(GL11.GL_SMOOTH); GL11.glEnable(GL11.GL_DEPTH_TEST); GL11.glDepthFunc(GL11.GL_LESS); GL11.glDepthMask(true); GL11.glEnable(GL11.GL_NORMALIZE); GL11.glMatrixMode(GL11.GL_PROJECTION); GLU.gluPerspective (90.0f,800f/600f, 1f, 500.0f); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glEnable(GL11.GL_CULL_FACE); GL11.glCullFace(GL11.GL_BACK); //enable lighting GL11.glEnable(GL11.GL_LIGHTING); ByteBuffer temp = ByteBuffer.allocateDirect(16); temp.order(ByteOrder.nativeOrder()); GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE, (FloatBuffer)temp.asFloatBuffer().put(lightDiffuse).flip()); GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS,(int)material_shinyness); GL11.glLight(GL11.GL_LIGHT2, GL11.GL_DIFFUSE, (FloatBuffer)temp.asFloatBuffer().put(lightDiffuse2).flip()); // Setup The Diffuse Light GL11.glLight(GL11.GL_LIGHT2, GL11.GL_POSITION,(FloatBuffer)temp.asFloatBuffer().put(lightPosition2).flip()); GL11.glLight(GL11.GL_LIGHT2, GL11.GL_AMBIENT,(FloatBuffer)temp.asFloatBuffer().put(lightAmbient).flip()); GL11.glLight(GL11.GL_LIGHT2, GL11.GL_SPECULAR,(FloatBuffer)temp.asFloatBuffer().put(lightDiffuse2).flip()); GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_CONSTANT_ATTENUATION, 0.1f); GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_LINEAR_ATTENUATION, 0.0f); GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_QUADRATIC_ATTENUATION, 0.0f); GL11.glEnable(GL11.GL_LIGHT2); Could somebody please help me?
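    A guess worth checking, since the model renders correctly in another loader: the parser above assumes every f line has exactly three vertices, so any quad faces in the OBJ silently lose their fourth vertex, which can leave exactly this kind of missing or black region. A sketch of a fan-triangulating face branch in the same style as the loader above (whether the tree actually contains quads is an assumption; the indices helper is invented here):

        }else if(line.startsWith("f ")) {
            String[] parts = line.split(" ");
            // fan-triangulate: (1,2,3), (1,3,4), ... handles tris and quads alike
            for(int i = 2; i < parts.length - 1; i++) {
                m.faces.add(new Face(
                    indices(parts[1], parts[i], parts[i+1], 0),   // vertex indices
                    indices(parts[1], parts[i], parts[i+1], 1),   // texcoord indices
                    indices(parts[1], parts[i], parts[i+1], 2),   // normal indices
                    currentTexture.getTextureID()));
            }
        }
        // helper: pulls field `slot` (v=0, vt=1, vn=2) out of three "v/vt/vn" tokens
        private static Vector3f indices(String a, String b, String c, int slot) {
            return new Vector3f(Float.valueOf(a.split("/")[slot]),
                                Float.valueOf(b.split("/")[slot]),
                                Float.valueOf(c.split("/")[slot]));
        }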

    Read the article

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output: mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Array Size : 3662760640 (3493.08 GiB 3750.67 GB) Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 Persistence : Superblock is persistent seems to contradict this (it's the same for every disk in the array): mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 0.90.00 UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host) Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB) Array Size : 2930208640 (2794.46 GiB 3000.53 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 The array was created like this: mdadm -v --create /dev/md2 \ --level=raid10 --layout=o2 --raid-devices=5 \ --chunk=64 --metadata=0.90 \ /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2 Each of the 5 individual drives has partitions like this: Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00057754 Device Boot Start End Blocks Id System /dev/sdc1 2048 34815 16384 83 Linux /dev/sdc2 34816 2930243583 1465104384 fd Linux raid autodetect Backstory So the SATA controller failed in a box I provide some support for. The failure was an ugly one, and individual drives fell out of the array a little at a time. While there are backups, they are not really done as frequently as we need. There is some data that I am trying to recover if I can. I got additional hardware and I was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I am trying to access that most recent data I am getting errors like those below, which makes me think that the array size discrepancy may be the problem. Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944
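    A quick sanity check of the quoted numbers suggests the two superblocks disagree about the device count, not just the size. For raid10 with the default two copies, usable capacity is (devices × used-dev-size) / 2, and:

        5 x 1465104256 KB / 2 = 3662760640 KB   (matches mdadm --detail)
        4 x 1465104320 KB / 2 = 2930208640 KB   (matches mdadm --examine)

    In other words, the per-disk superblock that --examine reads looks like it was written for a four-device array, which would fit the "want > limit" errors on the most recent data. This is an inference from the arithmetic only, not a confirmed diagnosis.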

    Read the article

  • How to reinstall Mac OS X on OS X/Linux dual-boot system?

    - by strangeronyourtrain
    My setup: I have a MacBook Pro 5,5 with a Mac OS X Snow Leopard partition and a Linux partition. I use rEFIt to boot into Linux. I didn't use Boot Camp when I originally installed Linux; instead, I manually created the partition (with either Disk Utility in OS X or Gparted on a Linux live CD--I don't recall which one) and then installed Linux on it from a live CD. The problem: My OS X partition is corrupt, and I need to reinstall Snow Leopard. Since I installed rEFIt from within OS X, I'm concerned that wiping the OS X partition will prevent me from booting into my Linux partition. How can I do this without losing access to my Linux partition? Is it possible to install Snow Leopard on the partition I reserved for it, or will it automatically overwrite the entire drive? And if I do the fresh OS X install and then install rEFIt again, will it automatically recognize my Linux partition? Thanks for any tips! Specs: MacBook Pro 5,5 (Mid-2009); Snow Leopard 10.6.7/64-bit Sabayon Linux, 2.6.36 kernel EDIT/UPDATE: Thanks, but the situation has taken a more complicated turn: I tried to reinstall Snow Leopard from the DVD, but it refused to install onto my Mac partition, claiming: "The disk cannot be used to start up your computer." Disk Utility wouldn't let me resize the partition or create a new one, and it doesn't see my Linux partition. It only displays the two partitions "Macintosh HD" and Linux Swap. I can, however, see all the partitions from Linux. This is the partition table as shown in Gparted: And the output of "fdisk -l" is: WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sda1 1 409639 204819+ ee GPT /dev/sda2 409640 349590464 174590412+ af HFS / HFS+ /dev/sda3 483122745 488392064 2634660 82 Linux swap / Solaris /dev/sda4 * 349590465 483122744 66766140 83 Linux Partition table entries are not in disk order I wonder if this is because I originally partitioned my disk with Gparted instead of OS X's Disk Utility (at this point, I don't recall whether I used Gparted or Disk Utility). In any case, it doesn't seem safe to do any reformatting with Disk Utility now, as I'm afraid it will wipe sda2 ("Macintosh HD") as well as sda4 (my Linux partition). So... I'm hoping to find a solution that doesn't involve wiping my entire hard disk. Would it be safe/possible to use Gparted to erase sda2 ("Macintosh HD") and then use the Snow Leopard DVD to install OS X onto just sda2 without touching the other partitions? Thanks for any insight!
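    Since the quoted fdisk output itself warns that the disk is GPT and fdisk only shows the hybrid MBR view, the GPT tools give a more trustworthy picture before any reformatting. A small sketch (read-only commands; nothing here modifies the disk):

        # GNU Parted reads the real GPT, which is what OS X's Disk Utility uses
        sudo parted /dev/sda unit MiB print

        # gdisk can additionally report whether the hybrid MBR and the GPT agree
        sudo gdisk -l /dev/sda

    If the two tables disagree, that would explain why Disk Utility and Gparted see different partitions on the same disk.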

    Read the article

  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is: VMware ESXi environment cPanel installed CentOS release 6.5 (Final) 4 CPUs 2G RAM 2x VM disks 100G each LVM system These were my previous storage settings (the server was working fine at this time): # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 95G 1.4G 88G 2% / tmpfs 939M 0 939M 0% /dev/shm /dev/sdb1 99G 188M 94G 1% /tmp /dev/sda1 485M 54M 407M 12% /boot My web developer asked me to merge the /tmp and / disks, so this is what I did: Delete the /dev/sdb1 partition using fdisk Create a new partition as LVM on /dev/sdb1 using fdisk Create a new physical volume -- pvcreate /dev/sdb1 Extend the volume group -- vgextend /dev/sdb1 vg_test01 Extend the logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root Resize the filesystem -- resize2fs /dev/vg_test01/lv_root This is the new configuration: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 213G 105G 97G 52% / tmpfs 939M 0 939M 0% /dev/shm /dev/sda1 485M 54M 407M 12% /boot /usr/tmpDSK 4.0G 145M 3.6G 4% /tmp Since applying the new settings, my web server has been throwing kernel panics quite often (around every 2 days). The message says: INFO: task <taskName>:<pid> blocked for more than 120 seconds. The processes affected, as far as I can see from the console, are: mysqld queueprocd httpd suphp vmtoolsd loop0 auditd The only way I can fix this is resetting (cold reboot) the VM. I don't think it is a hardware issue, as sar is not showing any bottleneck: Linux 2.6.32-431.3.1.el6.x86_64 (test01) 08/22/2014 _x86_64_ (4 CPU) 12:00:01 AM CPU %user %nice %system %iowait %steal %idle 12:10:01 AM all 26.86 0.01 0.98 0.57 0.00 71.57 12:20:01 AM all 1.78 0.02 1.03 0.08 0.00 97.09 12:30:01 AM all 26.34 0.02 0.85 0.05 0.00 72.74 12:40:01 AM all 27.12 0.01 1.11 1.22 0.00 70.54 12:50:01 AM all 1.59 0.02 0.94 0.13 0.00 97.32 01:00:01 AM all 26.10 0.01 0.77 0.04 0.00 73.07 01:10:01 AM all 27.51 0.01 1.16 0.14 0.00 71.18 01:20:01 AM all 1.80 0.07 1.06 0.08 0.00 96.99 01:30:01 AM all 26.19 0.01 0.78 0.05 0.00 72.96 01:40:01 AM all 26.62 0.02 0.87 0.05 0.00 72.45 01:50:02 AM all 1.35 0.01 0.87 0.02 0.00 97.75 02:00:01 AM all 26.11 0.02 0.69 0.02 0.00 73.17 02:10:01 AM all 26.73 0.02 0.89 0.14 0.00 72.21 02:20:01 AM all 1.45 0.01 0.92 0.04 0.00 97.58 02:30:01 AM all 26.59 0.01 1.06 0.03 0.00 72.31 02:40:01 AM all 26.27 0.01 0.72 0.05 0.00 72.95 02:50:01 AM all 0.86 0.01 0.50 0.09 0.00 98.53 03:00:01 AM all 25.61 0.02 0.39 0.03 0.00 73.96 03:10:01 AM all 26.30 0.08 0.66 0.14 0.00 72.82 03:20:01 AM all 0.81 0.01 0.51 0.04 0.00 98.63 03:30:02 AM all 26.15 0.02 0.53 0.07 0.00 73.24 03:40:01 AM all 26.06 0.01 0.47 0.04 0.00 73.42 03:50:01 AM all 0.96 0.02 0.51 0.03 0.00 98.48 Average: all 17.69 0.02 0.79 0.14 0.00 81.36 06:58:14 AM LINUX RESTART 07:00:01 AM CPU %user %nice %system %iowait %steal %idle 07:10:01 AM all 1.04 0.02 0.57 0.95 0.00 97.42 07:20:02 AM all 0.66 0.01 0.39 0.06 0.00 98.87 07:30:01 AM all 25.71 0.01 0.45 0.16 0.00 73.67 07:40:01 AM all 25.88 0.01 0.35 0.08 0.00 73.68 07:50:01 AM all 1.13 0.02 0.55 0.11 0.00 98.19 As you can see, the server became unresponsive at 03:50 AM and I had to reset the VM at 06:58 AM to bring the website up again. I would appreciate any help fixing this issue. Thank you very much
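    For reference, the canonical order of operations for the merge described above (device and VG names taken from the post; note that the post lists the vgextend arguments in the opposite order, since vgextend expects the volume-group name first):

        pvcreate /dev/sdb1
        vgextend vg_test01 /dev/sdb1        # VG name first, then the new PV
        lvextend -l +100%FREE /dev/vg_test01/lv_root
        resize2fs /dev/vg_test01/lv_root    # ext4 grows online; shrinking would not

    If the steps really were run as written, vgextend would have failed, so it is worth double-checking the shell history against this sequence.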

    Read the article

  • Best Practise: DNS and VPN (with private network IPs)

    - by ribx
    I am trying to find the best solution for my DNS problem. We are running several services in our company that you can reach only over VPN. Other services that are reachable through the internet got the domain ... At the moment all services inside the VPN network go by .local... These have a VPN IP in the private network 192.168.252.0/24. Clients range from Linux through OS X to Windows. I can think of 4 possibilities for implementing a DNS infrastructure: Most common: an internal DNS server that is pushed by the VPN. But this has several drawbacks: your DNS responses are limited to the speed of the VPN connection and your own DNS server. Because of very complex websites, this can increase page load times quite a lot. Also: we have several VPNs that are not connected to each other, and all of them have their own DNS server. Several DNS servers locally. These have to be configured by hand, and you have to use a third-party tool like dnsmasq. When you start a DNS request, you ask your locally running DNS server, which decides which server to ask for which domain name. One colleague of mine uses such a solution on OS X (I am sorry, I don't remember the name of the application). You use your domain hoster. Most of them have APIs available to manipulate your DNS entries, so you could publish your private network information to your domain hoster. I am not sure whether they all accept private network IPs, but I guess there will be some problems in the same way as in number 4. The one we currently use, because for us it's the most logical choice: we forward the subdomain *.local.. to our own public DNS server. This works quite well with some public DNS servers like Google's, but most ISPs do not forward the answers, or don't do so consistently. For example, my ISP sends me a positive result for a DNS request for a *.local.. domain only every 10th time I run nslookup. (Can someone explain this?) Here is the real question: Is there another solution we haven't thought about? Or: Which of these methods do you use?
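    For option 2, a minimal dnsmasq sketch of the per-domain split described above (the zone name and server IPs are placeholders, in the spirit of the question's redacted names):

        # /etc/dnsmasq.conf -- per-domain forwarding on the client
        server=/local.example.com/192.168.252.1   # *.local.example.com -> VPN-side DNS
        server=8.8.8.8                            # everything else -> public resolver

    With this, only queries for the internal zone travel over the VPN, so general browsing speed is unaffected, which addresses the main drawback listed under option 1.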

    Read the article

  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced recently; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity: change cables between the wall jack and device. change patch cables between the patch panel and switch port(s). try different switch ports within the 2960 stack. change end-user devices with known-good equipment (new phones, different PCs). clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int) Pored over the device logs and Observium RRD graphs. No link up/down issues from the switch side. change power strips on the end-user side. test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)* test cable runs with a Tripp-Lite cable tester. (clean) run diagnostics on the switch stack members. (clean) In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool... Interface Speed Local pair Pair length Remote pair Pair status --------- ----- ---------- ------------------ ----------- -------------------- Gi4/0/14 1000M Pair A 79 +/- 0 meters Pair B Normal Pair B 75 +/- 0 meters Pair A Normal Pair C 77 +/- 0 meters Pair D Normal Pair D 79 +/- 0 meters Pair C Normal
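    A couple of further checks that exist on 2960-class IOS and look below the interface counters (whether they would reveal anything in this particular case is, of course, not guaranteed):

        ! per-port hardware/ASIC counters, including discard categories the
        ! plain interface counters don't break out
        show controllers ethernet-controller Gi4/0/9

        ! compact error-counter view across the suspect ports
        show interfaces Gi4/0/9 counters errors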

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than the user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD: dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s / is a logical volume on the disk I'm testing with, mounted without DRBD iperf: [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s which is slower than I would expect. Then, once I format the device with ext4, I get dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s This doesn't seem right. There must be some other factor playing into this that I'm not aware of. global_common.conf global { usage-count yes; } common { protocol C; } syncer { al-extents 1801; rate 33M; } data_mirror.res resource data_mirror { device /dev/drbd1; disk /dev/sdb1; meta-disk internal; on cluster1 { address 192.168.33.10:7789; } on cluster2 { address 192.168.33.12:7789; } } For the hardware I have two identical machines: 6 GB RAM Quad core AMD Phenom 3.2Ghz Motherboard SATA controller 7200 RPM 64MB cache 1TB WD drive The network is 1Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference? Edited: I just tried monitoring the bandwidth used to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got: avg ~450Mbits writing to ext4 avg ~800Mbits writing to raw device It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.
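    Spelling out the expectation from the quoted numbers: the manual's claim amounts to throughput ≈ 0.97 × min(disk, network), and here the two legs are nearly identical:

        network:  941 Mbit/s / 8        ≈ 117.6 MB/s
        disk:     116 MB/s
        expected: 0.97 x min(117.6, 116) ≈ 112.5 MB/s

    So 81 MB/s raw and 56 MB/s through ext4 are well below even a pessimistic reading of that estimate, which supports the conclusion that something other than the 3% protocol overhead is the bottleneck. (One note on the quoted config: the syncer rate 33M setting caps resynchronisation traffic only; it should not throttle application writes, so it alone shouldn't explain this.)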

    Read the article

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems. First one: how to partition the flash drive? I shouldn't need to do this, but I'm no longer sure my partition is properly aligned, since I was forced to delete and create a new partition table after GParted complained when I tried to format the drive from FAT to ext4. The naive answer would be to say "just use the defaults and everything is going to be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html Then there is also the issue of cylinders, heads and sectors. Currently I get this:

        $ sfdisk -l -uM /dev/sdd
        Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
        Warning: The partition table looks like it was made
        for C/H/S=*/255/63 (instead of 30147/64/32).
        For this listing I'll assume that geometry.
        Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

        Device Boot Start  End    MiB    #blocks   Id  System
        /dev/sdd1       1  30146  30146  30869504  83  Linux

        $ fdisk -l /dev/sdd
        Disk /dev/sdd: 31.6 GB, 31611420672 bytes
        255 heads, 63 sectors/track, 3843 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently it's at 1 MiB). But I still don't know how to set the heads and sectors properly for my device. Second problem: file system. From the benchmarks I saw, ext4 provides the best performance; however, there is the issue of wear leveling. How can I know that my Transcend JetFlash 700's microcontroller provides wear leveling? Or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that", but I've never seen a single piece of evidence backing that up, and at some point people start mixing up SSD and USB flash drive technology. The safe option would be to go for ext2; however, a series of tests that I performed showed horrible performance! These values are from a real scenario and not some synthetic test:

        42 files: 3,429,415,284 bytes copied to flash drive
        original fat32:                  15.1 MiB/s
        ext4 after new partition table:  10.2 MiB/s
        ext2 after new partition table:   1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with some references, because a lot is said and re-said but then it lacks facts. Thank you for the help.
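    On the alignment half of this, the C/H/S geometry is a fiction for flash media and can be ignored; what matters is the starting offset in bytes, and parted accepts offsets in MiB directly. A minimal sketch that recreates the table with a 4 MiB-aligned first partition and verifies it, assuming the stick really is /dev/sdd (this wipes the partition table):

        # Recreate the partition table with the first partition starting at 4 MiB
        parted -s /dev/sdd mklabel msdos
        parted -s /dev/sdd mkpart primary 4MiB 100%
        parted /dev/sdd align-check optimal 1   # reports whether partition 1 is aligned
        # Then make the filesystem of choice, e.g.:
        mkfs.ext4 /dev/sdd1

    Because the start is given as a byte offset, heads and sectors never enter into it; the tools only display the fake geometry for backward compatibility.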

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:
    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space-overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink the size limit (offline if need be).
    - Avoid out-of-kernel modules.
    - R/W or read-only COW snapshots. If it's a block-level snapshot, the filesystem should be synced and quiesced during the snapshot.
    - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).
    Not required but nice-to-haves:
    - Transparent compression, per-file or subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
    - Space-efficient packing of small files.
    I tried ZFS on Linux, but the limitations were:
    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS does not scale with the number of devices in a raidz vdev.
    - Cannot add disks to raidz vdevs.
    - Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks.
    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
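    On the mixed-RAID-levels point specifically, LVM2's RAID logical volumes (backed by the in-kernel md/dm-raid code, so no out-of-tree modules) can put differently-protected LVs on the same three disks. A minimal sketch with made-up volume group and LV names and sizes:

        pvcreate /dev/sda /dev/sdb /dev/sdc
        vgcreate tank /dev/sda /dev/sdb /dev/sdc

        lvcreate --type raid1 -m 2 -L 50G  -n critical  tank  # 3-way mirror
        lvcreate --type raid5 -i 2 -L 500G -n important tank  # RAID5 across 3 disks
        lvcreate --type raid0 -i 3 -L 200G -n scratch   tank  # stripe, no redundancy

        # Growing a RAID LV later, then its filesystem:
        lvextend -L +100G tank/important
        resize2fs /dev/tank/important

    Redistribution is the weaker part of this story: reshaping an existing raid5 LV across newly added PVs is not something older LVM2 releases support, so that requirement is worth checking against the specific LVM version before committing.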

    Read the article

  • TreeView in Winforms and focus problem

    - by Marcus
    Hi, can anyone please explain to me why the form in the code below loses focus when selecting a tree node in the tree? What should happen is that the form/button gets the focus when the tree disappears, as in the ListView example, but it doesn't. Code example:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Text;
    using System.Windows.Forms;

    namespace FocusTest
    {
        public partial class Form1 : Form
        {
            #region Generated

            /// <summary>
            /// Required designer variable.
            /// </summary>
            private System.ComponentModel.IContainer components = null;

            /// <summary>
            /// Clean up any resources being used.
            /// </summary>
            /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
            protected override void Dispose(bool disposing)
            {
                if (disposing && (components != null))
                {
                    components.Dispose();
                }
                base.Dispose(disposing);
            }

            #region Windows Form Designer generated code

            /// <summary>
            /// Required method for Designer support - do not modify
            /// the contents of this method with the code editor.
            /// </summary>
            private void InitializeComponent()
            {
                System.Windows.Forms.ListViewItem listViewItem1 = new System.Windows.Forms.ListViewItem("Item1");
                System.Windows.Forms.ListViewItem listViewItem2 = new System.Windows.Forms.ListViewItem("Item2");
                System.Windows.Forms.ListViewItem listViewItem3 = new System.Windows.Forms.ListViewItem("Item3");
                System.Windows.Forms.TreeNode treeNode1 = new System.Windows.Forms.TreeNode("Node0");
                System.Windows.Forms.TreeNode treeNode2 = new System.Windows.Forms.TreeNode("Node1");
                System.Windows.Forms.TreeNode treeNode3 = new System.Windows.Forms.TreeNode("Node2");
                this.button1 = new System.Windows.Forms.Button();
                this.listView1 = new System.Windows.Forms.ListView();
                this.button2 = new System.Windows.Forms.Button();
                this.treeView1 = new System.Windows.Forms.TreeView();
                this.SuspendLayout();
                //
                // button1
                //
                this.button1.Location = new System.Drawing.Point(12, 12);
                this.button1.Name = "button1";
                this.button1.Size = new System.Drawing.Size(75, 23);
                this.button1.TabIndex = 0;
                this.button1.Text = "button1";
                this.button1.UseVisualStyleBackColor = true;
                this.button1.Click += new System.EventHandler(this.button1_Click);
                //
                // listView1
                //
                this.listView1.Items.AddRange(new System.Windows.Forms.ListViewItem[] { listViewItem1, listViewItem2, listViewItem3 });
                this.listView1.Location = new System.Drawing.Point(12, 41);
                this.listView1.Name = "listView1";
                this.listView1.Size = new System.Drawing.Size(121, 97);
                this.listView1.TabIndex = 1;
                this.listView1.UseCompatibleStateImageBehavior = false;
                this.listView1.Visible = false;
                this.listView1.SelectedIndexChanged += new System.EventHandler(this.listView1_SelectedIndexChanged);
                this.listView1.View = View.List;
                //
                // button2
                //
                this.button2.Location = new System.Drawing.Point(310, 11);
                this.button2.Name = "button2";
                this.button2.Size = new System.Drawing.Size(75, 23);
                this.button2.TabIndex = 2;
                this.button2.Text = "button2";
                this.button2.UseVisualStyleBackColor = true;
                this.button2.Click += new System.EventHandler(this.button2_Click);
                //
                // treeView1
                //
                this.treeView1.Location = new System.Drawing.Point(310, 41);
                this.treeView1.Name = "treeView1";
                treeNode1.Name = "Node0";
                treeNode1.Text = "Node0";
                treeNode2.Name = "Node1";
                treeNode2.Text = "Node1";
                treeNode3.Name = "Node2";
                treeNode3.Text = "Node2";
                this.treeView1.Nodes.AddRange(new System.Windows.Forms.TreeNode[] { treeNode1, treeNode2, treeNode3 });
                this.treeView1.Size = new System.Drawing.Size(121, 97);
                this.treeView1.TabIndex = 3;
                this.treeView1.Visible = false;
                this.treeView1.AfterSelect += new System.Windows.Forms.TreeViewEventHandler(this.treeView1_AfterSelect);
                //
                // Form1
                //
                this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
                this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
                this.ClientSize = new System.Drawing.Size(760, 409);
                this.Controls.Add(this.treeView1);
                this.Controls.Add(this.button2);
                this.Controls.Add(this.listView1);
                this.Controls.Add(this.button1);
                this.Name = "Form1";
                this.Text = "Form1";
                this.ResumeLayout(false);
            }

            #endregion

            private System.Windows.Forms.Button button1;
            private System.Windows.Forms.ListView listView1;
            private System.Windows.Forms.Button button2;
            private System.Windows.Forms.TreeView treeView1;

            #endregion

            public Form1()
            {
                InitializeComponent();
            }

            #region TreeView

            private void button2_Click(object sender, EventArgs e)
            {
                ToggleTreeView();
            }

            private void ToggleTreeView()
            {
                if (treeView1.Visible)
                {
                    // Detach and hide the tree; focus should fall back to the form
                    Controls.Remove(treeView1);
                    treeView1.Visible = false;
                }
                else
                {
                    // Show the tree as a drop-down below button2 and give it focus
                    Controls.Add(treeView1);
                    treeView1.Size = new Size(300, 400);
                    treeView1.Location = PointToClient(PointToScreen(new System.Drawing.Point(button2.Location.X, button2.Location.Y + button2.Height)));
                    this.treeView1.BringToFront();
                    treeView1.Visible = true;
                    treeView1.Select();
                }
            }

            private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
            {
                // Fires during the tree's own mouse handling, while it still has focus
                ToggleTreeView();
            }

            #endregion

            #region ListView

            private void button1_Click(object sender, EventArgs e)
            {
                ToggleListView();
            }

            private void ToggleListView()
            {
                if (listView1.Visible)
                {
                    Controls.Remove(listView1);
                    listView1.Visible = false;
                }
                else
                {
                    Controls.Add(listView1);
                    listView1.Size = new Size(300, 400);
                    listView1.Location = PointToClient(PointToScreen(new System.Drawing.Point(button1.Location.X, button1.Location.Y + button1.Height)));
                    this.listView1.BringToFront();
                    listView1.Visible = true;
                    listView1.Select();
                }
            }

            private void listView1_SelectedIndexChanged(object sender, EventArgs e)
            {
                if (listView1.Visible)
                    ToggleListView();
            }

            #endregion
        }
    }

    Read the article

  • How to remove iso 9660 from USB?

    - by a_m0d
    I have somehow managed to write an ISO 9660 image onto my USB drive, which makes my computers think the device is actually a CD. I have tried various methods of removing this partition, but nothing seems to work. I have tried fdisk, which says

        $ fdisk -l /dev/sdb
        Cannot open /dev/sdb

    parted crashes when I try to use it on this device. I have even tried

        $ dd if=/dev/zero of=/dev/sdb

    but it just hangs with no output (either on screen or on disk). However, when I plug the USB in, it does mount, and I can view (but not edit) the files on it.

    Edit: now the result is

        $ dd if=/dev/zero of=/dev/sdb
        dd: opening `/dev/sdb': Read-only file system

    I have also tried re-formatting it on Windows, but it gets to the end of the format process and then says "Couldn't format the drive". How can I remove this partition and get my whole USB drive back to normal again?

    EDIT 1: Trying a simple mkfs doesn't work:

        $ sudo mkfs -t vfat /dev/sdb
        mkfs.vfat 3.0.0 (28 Sep 2008)
        mkfs.vfat: Will not try to make filesystem on full-disk device '/dev/sdb' (use -I if wanted)

    I can't do mkfs on /dev/sdb1 because there is no such partition, as shown:

        $ ls /dev | grep sdb
        sdb

    EDIT 2: This is the information posted by dmesg when I plug the device in:

        $ dmesg
        ... (snip)
        usb 2-1: New USB device found, idVendor=058f, idProduct=6387
        usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        usb 2-1: Product: Mass Storage
        usb 2-1: Manufacturer: Generic
        usb 2-1: SerialNumber: G0905000000000010885
        usb-storage: device found at 4
        usb-storage: waiting for device to settle before scanning
        usb-storage: device scan complete
        scsi 6:0:0:0: Direct-Access FLASH Drive AU_USB20 8.07 PQ: 0 ANSI: 2
        sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
        sd 6:0:0:0: [sdb] Write Protect is off
        sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        sd 6:0:0:0: [sdb] Assuming drive cache: write through
        sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
        sd 6:0:0:0: [sdb] Write Protect is off
        sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        sd 6:0:0:0: [sdb] Assuming drive cache: write through
        sdb: unknown partition table
        sd 6:0:0:0: [sdb] Attached SCSI removable disk
        sd 6:0:0:0: Attached scsi generic sg2 type 0
        ISO 9660 Extensions: Microsoft Joliet Level 3
        ISO 9660 Extensions: RRIP_1991A
        SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
        CE: hpet increasing min_delta_ns to 15000 nsec

    This shows that the device is formatted as ISO 9660 and that it is /dev/sdb.

    EDIT 3: This is the message that I find at the bottom of dmesg after running cfdisk and writing a new partition table to the disk:

        SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
        sd 17:0:0:0: [sdb] Device not ready: Sense Key : Not Ready [current]
        sd 17:0:0:0: [sdb] Device not ready: < ASC=0xff ASCQ=0xffASC=0xff < ASCQ=0xff
        end_request: I/O error, dev sdb, sector 0
        Buffer I/O error on device sdb, logical block 0
        lost page write due to I/O error on sdb
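    For what it's worth, one common way out of this state, when the stick is unmounted and the kernel has not flagged it read-only, is to wipe the ISO 9660 signature rather than fight the partition tools. A minimal sketch, assuming the stick is still /dev/sdb; wipefs, parted, and mkfs.vfat are the standard util-linux/parted/dosfstools tools:

        umount /dev/sdb 2>/dev/null                  # make sure the auto-mount is gone
        wipefs -a /dev/sdb                           # drop the iso9660 signature
        dd if=/dev/zero of=/dev/sdb bs=1M count=16   # zero the start for good measure
        parted -s /dev/sdb mklabel msdos
        parted -s /dev/sdb mkpart primary fat32 1MiB 100%
        mkfs.vfat -F 32 /dev/sdb1

    If dd still reports a read-only file system after unplugging and re-plugging the stick, the drive's own controller has likely latched itself read-only, and no host-side command will clear that.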

    Read the article

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
    What is going on here?! I am baffled.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s

    Of note: my RAID50 is 3 sets of 8 disks, which might not be the best config for speed.

        OS: Ubuntu 12.04.1 x64
        Hardware RAID: RocketRAID 2782, 24-port controller
        Hard drive type: Seagate Barracuda ES.2 1TB
        Drivers: v1.1 open-source Linux drivers

    So 24 x 1TB drives, partitioned using parted. Filesystem is ext4. The I/O scheduler was noop, but I changed it to deadline with no apparent performance benefit or cost.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb
        GPT fdisk (gdisk) version 0.8.1

        Partition table scan:
          MBR: protective
          BSD: not present
          APM: not present
          GPT: present

        Found valid GPT with protective MBR; using GPT.
        Disk /dev/sdb: 41020686336 sectors, 19.1 TiB
        Logical sector size: 512 bytes
        Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8
        Partition table holds up to 128 entries
        First usable sector is 34, last usable sector is 41020686302
        Partitions will be aligned on 2048-sector boundaries
        Total free space is 5062589 sectors (2.4 GiB)

        Number  Start (sector)  End (sector)  Size      Code  Name
           1    2048            41015625727  19.1 TiB  0700  primary

    To me this should be working fine. I can't think of anything that would be causing this other than fundamental driver errors. I can't seem to get much, if any, higher than 361 MB/s. Is this hitting the "SATA2" link speed? It shouldn't be, given it is a PCIe 2.0 card. Or maybe some caching quirk (I do have Write Back enabled). Does anyone have any suggestions? Tests for me to perform? If you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I just expected more from RAID 50 and 24 drives together...

    EDIT (hdparm results):

        serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb
        /dev/sdb:
         Timing cached reads:   17458 MB in 2.00 seconds = 8735.50 MB/sec
         Timing buffered disk reads: 884 MB in 3.00 seconds = 294.32 MB/sec

    EDIT 2 (config details): I am using a RAID block size of 256K. I was told a larger block size is better for larger files (in my case, large video files).

    EDIT 3: (Bonnie++ results. Would love some guidance with this!)
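    One cheap thing to rule out before blaming the controller: sequential reads that trail writes this badly on a wide stripe are often a read-ahead problem, since the write-back cache coalesces writes while reads have to be anticipated by the kernel. A sketch of the check, reusing the device and test file from above:

        blockdev --getra /dev/sdb                            # current read-ahead, in 512-byte sectors
        sudo blockdev --setra 16384 /dev/sdb                 # try 8 MiB of read-ahead
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop the page cache so the re-test is honest
        sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000

    If the read number climbs with larger read-ahead, the fix is a tuning knob rather than hardware; if it doesn't move, the controller or driver becomes the more likely suspect.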

    Read the article

  • How can I repair my USB drive?

    - by yurko
    The USB drive is in a read-only state and I can't repair it. First of all I tried to erase it using dd:

        root@yurko-laptop:/home/yurko-laptop# ls -l /dev/disk/by-id | grep usb
        lrwxrwxrwx 1 root root  9 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0 -> ../../sdb
        lrwxrwxrwx 1 root root 10 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0-part1 -> ../../sdb1

        root@yurko-laptop:/home/yurko-laptop# dd if=/dev/zero of=/dev/sdb
        dd: writing to `/dev/sdb': No space left on device
        8257537+0 records in
        8257536+0 records out
        4227858432 bytes (4.2 GB) copied, 942.633 s, 4.5 MB/s

    After that I wanted to create a new filesystem using fdisk:

        root@yurko-laptop:/home/yurko-laptop# fdisk /dev/sdb
        You will not be able to write the partition table.
        WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
        switch off the mode (command 'c') and change display units to sectors (command 'u').

        Command (m for help): p

        Disk /dev/sdb: 4227 MB, 4227858432 bytes
        4 heads, 63 sectors/track, 32768 cylinders
        Units = cylinders of 252 * 512 = 129024 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Device Boot  Start    End   Blocks   Id  System
        /dev/sdb1       18   32768  4126596  b   W95 FAT32

    fdisk showed that the partition still exists and that I can't write the partition table. I tried to delete the existing partition:

        Command (m for help): d
        Selected partition 1
        Command (m for help): w
        Unable to write /dev/sdb

    Why am I not able to write the partition table? Does it mean that some hardware failure occurred? And is it possible to repair the current USB drive? I've tried to use hdparm and it showed that the read-only flag is on:

        root@yurko-laptop:/home/yurko-laptop# hdparm /dev/sdb
        /dev/sdb:
        SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0a 00 00 00 00 26 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
         multcount = 0 (off)
         readonly  = 1 (on)
         readahead = 256 (on)
         geometry  = 1016/131/62, sectors = 8257536, start = 0
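    hdparm's readonly = 1 can mean either that the kernel holds the block device read-only or that the stick's own controller reports write protection. A short sketch to tell the two apart and clear the host-side flag if that is all it is, using the same device name as above:

        blockdev --getro /dev/sdb                    # 1 means the kernel holds the device read-only
        blockdev --setrw /dev/sdb                    # try clearing the kernel-level flag
        hdparm -r0 /dev/sdb                          # same attempt via hdparm
        dd if=/dev/zero of=/dev/sdb bs=1M count=1    # retry a small write

    If the flag snaps back to 1 or the write still fails, the flash controller has almost certainly latched itself read-only after detecting NAND wear or corruption, which is a hardware condition no partitioning tool can undo.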

    Read the article

  • Download - Upload is too slow on Centos

    - by Mehdi
    My download/upload speed both into and out of the server is too slow (around 50 KB/s!). Did I miss some configuration? Some information:

        CentOS release 6.3
        uptime load average: 0.17, 0.32, 0.37

    free -m:

                     total   used  free  shared  buffers  cached
        Mem:         24009  21988  2021       0      806   18098
        -/+ buffers/cache:   3083 20926
        Swap:         4095     28  4067

    lshw -C network:

        *-network
           description: Ethernet interface
           product: 82574L Gigabit Network Connection
           vendor: Intel Corporation
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: eth0
           version: 00
           serial: 00:25:90:70:17:4a
           size: 100MB/s
           capacity: 1GB/s
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
           configuration: autonegotiation=off broadcast=yes driver=e1000e driverversion=1.9.5-k duplex=full firmware=2.1-2 ip=108.175.8.123 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s
           resources: irq:16 memory:fb900000-fb91ffff ioport:e000(size=32) memory:fb920000-fb923fff

    ethtool eth0:

        Settings for eth0:
            Supported ports: [ TP ]
            Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
            Supports auto-negotiation: Yes
            Advertised link modes: Not reported
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 100Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 1
            Transceiver: internal
            Auto-negotiation: off
            MDI-X: off
            Supports Wake-on: pumbg
            Wake-on: g
            Current message level: 0x00000001 (1)
            Link detected: yes

    dmesg | grep e1000e:

        e1000e: Intel(R) PRO/1000 Network Driver - 1.9.5-k
        e1000e: Copyright(c) 1999 - 2012 Intel Corporation.
        e1000e 0000:02:00.0: Disabling ASPM L0s
        e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        e1000e 0000:02:00.0: setting latency timer to 64
        e1000e 0000:02:00.0: irq 33 for MSI/MSI-X
        e1000e 0000:02:00.0: irq 34 for MSI/MSI-X
        e1000e 0000:02:00.0: irq 35 for MSI/MSI-X
        e1000e 0000:02:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:25:90:70:17:4a
        e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection
        e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e 0000:02:00.0: eth0: Unsupported Speed/Duplex configuration
        e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e 0000:02:00.0: Disabling ASPM L1
        e1000e 0000:02:00.0: eth0: changing MTU from 1500 to 9000
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
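    One concrete lead in the output above: the gigabit NIC is linked at 100 Mb/s with autonegotiation off, and the driver logs "Unsupported Speed/Duplex configuration". Gigabit over copper requires autonegotiation, so a forced speed/duplex setting caps the link at 100 Mb/s. A sketch of the fix to try, assuming the interface really is eth0:

        ethtool -s eth0 autoneg on                   # re-enable autonegotiation
        ethtool eth0 | grep -E 'Speed|Duplex|Auto'   # should now show 1000Mb/s

        # To persist across reboots on CentOS 6, add to
        # /etc/sysconfig/network-scripts/ifcfg-eth0:
        #   ETHTOOL_OPTS="autoneg on"

    This would explain the link speed, though 50 KB/s is far below even 100 Mb/s, so the switch port configuration and any upstream shaping are worth checking alongside it.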

    Read the article

  • Areca 1280ml RAID6 volume set failed

    - by Richard
    Today we hit some kind of worst-case scenario and are open to any kind of good ideas. Here is our problem: we are using several dedicated storage servers to host our virtual machines. Before I continue, here are the specs:
    - Dedicated server machine
    - Areca 1280ml RAID controller, firmware 1.49
    - 12x Samsung 1TB HDDs
    We configured one RAID6 set with 10 discs that contains one logical volume. We have two hot spares in the system. Today one HDD failed. This happens from time to time, so we replaced it. While rebuilding, a second disc failed. Normally this is no fun. We stopped heavy IO operations to ensure a stable RAID rebuild. Sadly the hot-spare disc failed while rebuilding and the whole thing stopped. Now we have the following situation:
    - The controller says that the RAID set is rebuilding.
    - The controller says that the volume failed.
    It is a RAID6 system and two discs failed, so the data has to be intact, but we cannot bring the volume online again to access the data. While searching we found the following leads. I don't know whether they are good or bad:
    1. Mirroring all the discs to a second set of drives, so we would have the possibility to try different things without losing more than we already have.
    2. Trying to rebuild the array in R-Studio. But we have no real experience with the software.
    3. Pulling all drives, rebooting the system, changing into the Areca controller BIOS, and reinserting the HDDs one by one. Some people say that they brought the system online this way. Some say the effect is zero. Some say they blew the whole thing.
    4. Using undocumented Areca commands like "rescue" or "LeVel2ReScUe".
    5. Contacting a computer forensics service. But whoa... preliminary estimates by phone exceeded 20.000€.
    That's why we would kindly ask for help. Maybe we are missing the obvious? And yes of course, we have backups. But some systems lost one week of data, which is why we'd like to get the system up and running again. Any help, suggestions and questions are more than welcome.
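    A minimal sketch of lead 1 with GNU ddrescue, which records progress in a map file so imaging can be paused, resumed, and retried without re-reading good areas. Device names and the scratch location are placeholders, and the scratch space has to hold an image of every member disk:

        # First pass: copy everything readable, skip slow/bad areas (-n)
        for d in /dev/sd[b-m]; do
            ddrescue -n "$d" /mnt/scratch/$(basename "$d").img /mnt/scratch/$(basename "$d").map
        done
        # Second pass on the weak disks only: retry bad sectors a few times
        # ddrescue -r3 /dev/sdX /mnt/scratch/sdX.img /mnt/scratch/sdX.map

    Imaging first means every later experiment, whether R-Studio, the controller BIOS dance, or the undocumented rescue commands, is reversible.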

    Read the article

  • How to create an EFI System Partition?

    - by Alex Popov
    TL;DR: How do I create an EFI System Partition from scratch? How do I put the EFI firmware on it once it is created?

    Long version: I have a Toshiba T430 laptop. I received it with Windows 7 installed (but I think it originally shipped with Windows 8). I installed Ubuntu on it, but deleted some partitions on the disk, so that I ended up wiping out Windows and only having Ubuntu. Among the deleted partitions was the EFI System Partition. I discovered that Ubuntu now boots in Legacy mode (and not UEFI). I am trying to follow this guide on converting my Ubuntu installation from Legacy to UEFI. The problem: since there is no EFI partition, whenever I choose from the BIOS to boot using UEFI, I cannot boot. That counts not only for the hard drive, but USB and DVD as well. I think this is logical: it expects an EFI partition, and since it can't find one, it cannot continue booting further, be it from HDD or DVD. So how do I recreate the EFI partition? The guide above says:

        Creating an EFI partition
        If you are manually partitioning your disk in the Ubuntu installer, you need to make sure you
        have an EFI partition set up. If your disk already contains an EFI partition (eg if your
        computer had Windows8 preinstalled), it can be used for Ubuntu too. Do not format it. It is
        strongly recommended to have only 1 EFI partition per disk. An EFI partition can be created
        via a recent version of GParted (the GParted version included in the 12.04 disk is OK), and
        must have the following attributes:
        - Mount point: /boot/efi (remark: no need to set this mount point when using the manual
          partitioning, the Ubuntu installer will detect it automatically)
        - Size: minimum 100MiB. 200MiB recommended.
        - Type: FAT32
        - Other: needs a "boot" flag.

    I had some trouble creating this partition: I boot from a live Ubuntu DVD, open GParted, and create a 200 MB partition formatted as FAT32. In GParted I cannot set the mount point and thus cannot set the boot flag. I didn't set the mount point in /etc/fstab, since it's a live CD and fstab looked quite different from what I expected compared to a normal boot; I just didn't know what values to set. I booted again via the live DVD and then chose to install Ubuntu. I then created a partition with the mentioned criteria: mount point, 200 MB, FAT32, boot flag. However, I continue to have this problem, and I suppose it's because on that partition there is no EFI firmware; it's just an empty partition that is suitable to hold EFI firmware. So again, how do I create an EFI partition that has the EFI software on it, so that the laptop can once again boot in UEFI mode?
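    As a sketch of the general approach: the EFI firmware itself lives on the motherboard; the ESP only has to hold the bootloader files, and grub-install writes those for you. Assuming a GPT disk at /dev/sda with the new ESP as partition 1 and Ubuntu's root on /dev/sda2 (adjust to the real layout), from a live session booted in UEFI mode with network access:

        parted -s /dev/sda mkpart ESP fat32 1MiB 201MiB
        parted -s /dev/sda set 1 boot on     # on GPT this marks the ESP; newer parted also accepts "esp"
        mkfs.vfat -F 32 /dev/sda1

        mount /dev/sda2 /mnt                 # the existing Ubuntu root
        mkdir -p /mnt/boot/efi
        mount /dev/sda1 /mnt/boot/efi
        for d in dev proc sys; do mount --bind /$d /mnt/$d; done
        chroot /mnt apt-get install grub-efi-amd64   # replaces the BIOS grub-pc
        chroot /mnt grub-install --target=x86_64-efi /dev/sda
        chroot /mnt update-grub

    The live session has to be started in UEFI mode for grub-install to register the boot entry with the firmware; since the machine currently refuses to boot the DVD via UEFI at all, that firmware behavior is worth investigating first, because a missing ESP on the hard disk should not normally prevent UEFI booting from removable media.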

    Read the article

< Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >