Search Results

Search found 20242 results on 810 pages for 'faith in unseen things'.


  • What things must I know about OpenAL memory management?

    - by mystify
    I am playing sound with OpenAL, and it seems to increase the memory footprint dramatically for every little sound I play. OpenAL never seems to free memory itself, and playing a Source causes the footprint to grow. I couldn't find any good resources about OpenAL memory management, but I bet I must do a lot of the cleanup myself. Maybe someone knows a resource for that?
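    For what it's worth, OpenAL does not free these resources for you: every handle from alGenBuffers and alGenSources stays allocated until it is explicitly deleted, and a buffer cannot be deleted while it is still attached to a source. Below is a minimal sketch of the usual teardown pattern, shown through LWJGL's Java binding of the same C API (the SoundHandle wrapper is hypothetical; the al* calls have identical names in C):

        // Minimal OpenAL cleanup sketch (LWJGL binding); assumes the source and
        // buffer handles were created earlier with alGenSources/alGenBuffers.
        import static org.lwjgl.openal.AL10.*;

        public final class SoundHandle {
            private final int source;
            private final int buffer;

            public SoundHandle(int source, int buffer) {
                this.source = source;
                this.buffer = buffer;
            }

            public void dispose() {
                alSourceStop(source);            // can't free a buffer queued on a playing source
                alSourcei(source, AL_BUFFER, 0); // detach the buffer from the source
                alDeleteSources(source);         // free the source handle
                alDeleteBuffers(buffer);         // free the PCM data held by the buffer
            }
        }

    If the footprint still grows after pairing every alGen* call with a matching alDelete*, the leak is more likely in the surrounding decoding code than in OpenAL itself.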

    Read the article

  • How to map combinations of things to a relational database?

    - by Space_C0wb0y
    I have a table whose records represent certain objects. For the sake of simplicity, assume that the table has only one column, the unique ObjectId. Now I need a way to store combinations of objects from that table. The combinations have to be unique, but can be of arbitrary length. For example, if I have the ObjectIds 1, 2, 3, 4, I want to store the following combinations: {1,2}, {1,3,4}, {2,4}, {1,2,3,4}. The ordering is not important. My current implementation is a table Combinations that maps ObjectIds to CombinationIds, so that every combination receives a unique id:

        ObjectId | CombinationId
        -------------------------
               1 | 1
               2 | 1
               1 | 2
               3 | 2
               4 | 2

    This is the mapping for the first two combinations of the example above. The problem is that the query for finding the CombinationId of a specific combination seems to be very complex. The two main usage scenarios for this table will be iterating over all combinations and retrieving a specific combination. The table will be created once and never updated. I am using SQLite through JDBC. Is there a simpler way, or a best practice, to implement such a mapping?
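    One common workaround (an addition here, not something from the question) is to denormalize: store a canonical signature for each combination, i.e. the sorted member ids joined with commas, in a column with a UNIQUE index. Looking up a specific combination then becomes a single indexed equality query instead of a complex relational-division query. A hedged JDBC sketch, with an assumed table Combinations(CombinationId, Signature):

        import java.sql.*;
        import java.util.*;

        public class CombinationLookup {
            // Canonical form: sorted ids joined with commas, so {2,4} and {4,2} match.
            public static String signature(Collection<Integer> objectIds) {
                List<Integer> ids = new ArrayList<>(objectIds);
                Collections.sort(ids);
                StringBuilder sb = new StringBuilder();
                for (int id : ids) {
                    if (sb.length() > 0) sb.append(',');
                    sb.append(id);
                }
                return sb.toString();
            }

            // Returns the CombinationId, or -1 if the combination is unknown.
            public static long findCombination(Connection conn, Collection<Integer> objectIds)
                    throws SQLException {
                String sql = "SELECT CombinationId FROM Combinations WHERE Signature = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, signature(objectIds));
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getLong(1) : -1;
                    }
                }
            }
        }

    Since the table is created once and never updated, the usual objection to storing derived data hardly applies; the original ObjectId-to-CombinationId mapping table can be kept alongside for iterating over all combinations.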

    Read the article

  • Things to take care of when sending the username and password in an Ajax request?

    - by Anil Namde
    Normally we have a login page with username and password fields and a sign-in button. With the normal form-based approach, we send the data and the page posts back. Now I would like to change this to use Ajax. Since Ajax also uses normal HTTP, I guess nothing is different from the existing workflow apart from the postback, right? Now my question is: what security issues can there be, and what care should be taken when doing this? Should we encode the data, say with Base64, before sending it via Ajax?

    Read the article

  • All things equal, what is the fastest way to output data to disk in C++?

    - by user260197
    I am running simulation code that is largely bound by CPU speed. I am not interested in pushing data out to a user interface, simply in saving it to disk as it is computed. What would be the fastest solution with the least overhead? iostreams? printf? I have previously read that printf is faster. Will this depend on my code, and is it impossible to get an answer without profiling? Edit: The output data needs to be in text format, whether tab- or comma-separated. This will require formatting, precision, etc. Running on Windows.

    Read the article

  • What are all the concurrent things [data structure, algorithm, locking mechanism] missing in .Net 3.

    - by user49767
    This is the first time I am a bit disappointed in StackOverflow, because my question http://stackoverflow.com/questions/2571727/c-concurrency-vs-java-concurrency-which-is-neatly-designed-which-is-better was closed. My intention was just to gather knowledge from programming gurus who have worked with both technologies. Rather than closing this question, please help me by discussing what is good, bad, and ugly in the multi-threading parts of both platforms. Comparisons of .Net 4.0 with JDK 6 (or JDK 7) are also welcome.

    Read the article

  • Freelancer - client agreement. What things are worth writing down explicitly?

    - by Dzida
    Hi guys, This is not a technical question, though I think it is quite important for software developers who work as freelancers. The general advice is to sign a written contract before starting a new job. When I started my work as a freelancer, I had no clue what such a contract should look like. Now I have some ideas, but I believe many of the freelancers gathered on SO can add interesting advice for less experienced colleagues (like me). So the question is: what clauses should freelancers put in agreements with their clients to make their projects less stressful and better secured? What are your experiences here? PS: Please write your country at the bottom of your post - I guess some of this might be country-specific, since forms of agreement probably differ depending on the country where business is done.

    Read the article

  • Java Servlet framework that does things like reencoding images to a preferred format etc.

    - by mP
    Are there any frameworks/libraries that provide servlets/filters etc. that handle reencoding of images on the fly? Specifically, something that can:

    - interpret the Accept headers and output the file, reencoding into the new format if necessary by checking the actual format of the original image file;
    - provide a low- and a high-quality version of an image;
    - reencode an image into new dimensions, where width and height might be query string parameters.

    I could create versions of the file in all the formats at upload time, but that seems overkill. I would rather lazily create the reencoded file and stick it in a cache in case it gets served again.
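    No single standard framework comes to mind for all of this, but the core of the first point is small enough to sketch with the standard javax.imageio API. The servlet below is purely illustrative (the image directory, the naive Accept-header check, and the absence of caching and path validation are all simplifications, not a known library's behavior):

        import java.io.*;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import javax.servlet.http.*;

        public class ReencodingServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // Illustrative location; real code must validate the path.
                File original = new File("/data/images", req.getPathInfo());

                // ImageIO.read detects the actual format of the stored file.
                BufferedImage img = ImageIO.read(original);

                // Naive content negotiation against the Accept header.
                String accept = req.getHeader("Accept");
                String format = (accept != null && accept.contains("image/png")) ? "png" : "jpg";

                resp.setContentType("jpg".equals(format) ? "image/jpeg" : "image/" + format);
                ImageIO.write(img, format, resp.getOutputStream()); // reencode on the fly
            }
        }

    Resizing fits the same shape: scale the BufferedImage according to width/height query parameters before the ImageIO.write call, and put the finished bytes in a cache keyed by path, format, and dimensions.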

    Read the article

  • How to test that a variable is not equal to multiple things? Python

    - by M830078h
    This is the piece of code I have:

        choice = ""
        while choice != "1" and choice != "2" and choice != "3":
            choice = raw_input("pick 1, 2 or 3")
            if choice == "1":
                print "1 it is!"
            elif choice == "2":
                print "2 it is!"
            elif choice == "3":
                print "3 it is!"
            else:
                print "You should choose 1, 2 or 3"

    While it works, I feel that it's really clumsy, specifically the while clause. What if I had more acceptable choices? Is there a better way to write that condition?
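    The usual answer is a membership test against a collection of valid choices instead of a chain of != comparisons; in Python that is exactly while choice not in ("1", "2", "3"):. The same data-driven idiom, sketched in Java (names illustrative): adding a fourth choice means touching only the collection, never the loop condition.

        import java.util.Scanner;
        import java.util.Set;

        public class PickOne {
            public static void main(String[] args) {
                Set<String> valid = Set.of("1", "2", "3"); // add more choices here only
                Scanner in = new Scanner(System.in);
                String choice = "";
                while (!valid.contains(choice)) {  // one membership test replaces the chained ANDs
                    System.out.print("pick 1, 2 or 3: ");
                    choice = in.nextLine().trim();
                }
                System.out.println(choice + " it is!");
            }
        }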

    Read the article

  • In Cocoa (or maybe GUI development in general) how do you specify an arbitrary number of things tile

    - by RankWeis
    I'm new to creating GUIs; everything I've done up until this point has used the command line. As an experiment, I'm trying to create a port of Minesweeper for the Macintosh. I've got the CLI version working, but I'm running into walls everywhere with the GUI. The first thing it seems I have to do, however, is tile n x m "boxes" for the grid, and I'm not sure how to do that. The information is ready to be handed to it, but I don't know where to do it, or how. Also, if anyone has any recommendations for sites or Cocoa development books, feel free to drop them in here... Thanks!

    Read the article

  • How to efficiently render different things at different times in a game?

    - by Baqjohanson
    Sorry for the ambiguous title. What I am wondering is: what is an efficient way to alternate rendering between, let's say, a main menu, an options menu, and "in the game"? The only two ways I've come up with so far are to have one render function with code for each part (menu, ...) and a variable that controls what gets drawn, or to have multiple render functions and a function pointer that is set to the appropriate one and then called. I always wonder how more professional games do it.
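    One pattern that generalizes both of those ideas (a sketch of a common approach, not the only professional one) is to give each screen its own state object with a render method and let the loop call whatever state is current; polymorphic dispatch then replaces both the control variable and the raw function pointer. Illustrative Java:

        interface GameState {
            void render();
        }

        class MainMenu implements GameState {
            public void render() { /* draw the main menu */ }
        }

        class OptionsMenu implements GameState {
            public void render() { /* draw the options menu */ }
        }

        class InGame implements GameState {
            public void render() { /* draw the world */ }
        }

        class Game {
            private GameState current = new MainMenu();
            private boolean running = true;

            // Called from input handling, e.g. when "Start Game" is clicked.
            void setState(GameState next) { current = next; }

            void loop() {
                while (running) {
                    current.render(); // no switch statement; dispatch picks the right code
                }
            }
        }

    Stacking states (a menu drawn over a paused game) is a natural extension: keep a stack of GameState objects and render from the bottom up.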

    Read the article

  • MVP Summit 2011 summary and thoughts: The “I hope I don’t cross a line and lose my MVP status” post

    - by George Clingerman
    I've been wanting to write this post summarizing my thoughts about the MVP summit, but have been dragging my feet since it's a very difficult one to write. However, seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx), Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully though, because I'm going to be dancing around a ton of NDA minefields, as well as walking the tightrope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations; they're not thoughts that Microsoft has asked me to pass along, and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP summit.

    Let's start off with a short imaginary question and answer session.

        Has Microsoft's management of the App Hub forums and XBLIG been rather poor? Yes.
        Do I think we're going to see changes to that overnight? No.
        Will it continue to look bad from the outside? Somewhat.

    Confusing, right? Well, that's kind of how things are right now. Lots of confusion.

    XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited about and invested in the technology. That's fantastic for XNA, and it really should show you the future the framework has. It's here to stay.

    So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that the pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown, and everyone wants some time with XNA. This gets XNA pulled in all directions, and as it moves onto new platforms, it plays catch-up trying to get those platforms up to speed to where Xbox LIVE Indie Games has grown. Forums, documentation, educational content: they all need to be there, because Xbox LIVE Indie Games has all of that and more.

    Along with the catch-up in features/documentation/awesomeness, there's the catch-up that the people on the team have to play. New platforms and new areas of development mean new players, and those new guys don't have the history of being around from the beginning. This leads, at times, to a lack of understanding of just how important some things are, because they seem so small and insignificant (Rich Text defaulting for new forum profiles would be one thing that jumps to mind). If you're not aware that the forums have become more than just a basic Q&A, if you're not aware that they're a central hub to a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and central forum site work for everyone it's now serving.

    So yeah, a lot of our pain this last year has been simply that XNA is doing well and XBLIG is doing well, so the focus was shifted to catching other things up. It hurts when a parent seems to not have any time for you and is spending so much time with your new baby brother. Growing pains. All families, and in our case our product family, experience them to some degree.
    I think as WP7 matures, we'll see the team figuring out how to give everyone the right amount of attention.

    While we're talking about some of our growing pains, it is also important to note (although not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things that we do. If you paid attention to the talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed: http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo, since we're being treated exactly the same? No way. However, it DOES show that the way we're being treated is no indication of the stability and future of the platform; it's just Microsoft dropping the communication ball on two playing fields. We're not alone, and we're not even being treated worse. Not great, but also, in a weird way, a very good sign.

    Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers that I'm not stepping over some NDA line I shouldn't be). First, I discovered that the XBLIG user base is bigger than I personally had estimated. I won't give the exact numbers (although we did beg Microsoft to release some of these numbers, so maybe someday?), but it was much larger than my original guesstimates, and I was pleasantly surprised. Maybe some of you guys had the right number when you were guessing, but I know that mine was much too low. Even MORE importantly, the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something that I had to share.

    On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there, other than that the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position, I have a resume waiting in the wings. I can't deny that I think it would be my dream job...

    ...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not, the single biggest pain point with the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs, and I'm even seeing more communication from them on the forums, so I'm hoping that's a long-term change. I really think they understood the issue; the problem remains how to open that communication channel in a way that is sustainable. I think they'll get it figured out, and hopefully that's sooner rather than later.

    During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You also may have noticed that Andy and Catalin both mentioned me in their summit write-ups. I may have come on a bit strong while I was there... went a little out of character for myself. I've been agitated for a while with the way things have been, and I've been listening to you guys and hearing you be agitated. I'm also watching some really awesome indie game developers look elsewhere and leave the platform. Some of them we might not have been able to keep even with changes, but others are leaving only because of perceptions and a lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list, and I took that list and verbalized it.
    I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person... I felt bad about that, but I couldn't stay silent.] Hopefully it did something, guys; I really did try hard to get the message across.

    Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of: from people in the community who are doing an awesome job, to the re-launch of XboxIndies that was going on that week, and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/), who has purchased one XBLIG every day for over 100 days now. The community is freaking rocking it, and I made sure to highlight that.

    So in conclusion, I'd just like to say hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground: it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling around trying to figure things out and make improvements all around. There are quite a few new gals and guys, and it's going to take them time to catch up, and there are a lot of constantly shifting priorities. We all have one toy, one team, and we're fighting for time with it.

    It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that. We took things into our own hands, it got noticed, and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted or wrote about the re-launch of XboxIndies.com, or the support I've gotten from them for my weekly XNA Notes). XNA is here to stay; it's time for us to stop being scared of that and figure out how to make our own games the successes they should be.

    There's definitely a list of things that need to be fixed and things that should be improved, and I think we should definitely keep vocal about that with Microsoft. Keep it short, focused and prioritized. There are also a lot of things we can do ourselves while we're waiting on them to fix and change things - lots of ways we can compensate for particular weaknesses in the channel. The kind of stuff we can step up and do ourselves. Do it on our own, you know, the way indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • Mailman Error / Cpanel

    - by Faith In Unseen Things
    Mailman is giving off this error when any changes are made to a list:

        =======================
        Bug in Mailman version 2.1.14
        We're sorry, we hit a bug!
        Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs.

    I ran:

        /usr/local/cpanel/bin/mailman-install --force

    Then it says at the end:

        Updating Usenet watermarks - nothing to update here
        Nothing to do.
        updating old qfiles
        cp: cannot remove `/usr/local/cpanel/img-sys/gnu-head-tiny.jpg': Permission denied
        cp: cannot remove `/usr/local/cpanel/img-sys/mailman-large.jpg': Permission denied
        cp: cannot remove `/usr/local/cpanel/img-sys/mailman.jpg': Permission denied
        cp: cannot remove `/usr/local/cpanel/img-sys/mm-icon.png': Permission denied
        cp: cannot remove `/usr/local/cpanel/img-sys/PythonPowered.png': Permission denied
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ca (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/uk (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/it (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/es (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/en (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/da (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/eu (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/no (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sv (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/tr (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_CN (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/nl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fi (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ast (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_TW (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ko (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sk (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ro (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ja (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ru (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ia (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/gl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/vi (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/lt (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/cs (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt_BR (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/he (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hr (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ar (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/et (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/de (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fr (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hu (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sr (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches/config (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ca (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/uk (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/it (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/es (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/da (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/eu (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/no (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sv (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/tr (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_CN (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/nl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fi (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ast (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_TW (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ko (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sk (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ro (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ja (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ru (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ia (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/gl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/vi (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/lt (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/cs (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sl (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt_BR (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/he (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hr (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ar (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/et (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/de (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fr (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hu (fixing)
        directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sr (fixing)
        Warning: Private archive directory is other-executable (o+x). This could allow other users on your system to read private archives. If you're on a shared multiuser system, you should consult the installation manual on how to fix this.
        Problems found: 76
        Re-run as mailman (or root) with -f flag to fix
        must be privileged to use -u
        must be privileged to use -u
        Unable to touch file /var/cpanel/mailman2: Permission denied at /usr/local/cpanel/bin/mailman-install line 275.
        [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity
        2254: Unable to set RLIMIT_AS to infinity at /usr/local/cpanel/Cpanel/Rlimit.pm line 113
            Cpanel::Rlimit::set_rlimit_to_infinity() called at /usr/local/cpanel/scripts/restartsrv_mailman line 18
            eval {...} called at /usr/local/cpanel/scripts/restartsrv_mailman line 15
        warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity
        2254: Unable to set RLIMIT_AS to infinity
        [2012-04-13 19:33:55 -0500] warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied at /usr/local/cpanel/Cpanel/SafeDir/MK.pm line 153
            Cpanel::SafeDir::MK::safemkdir('/var/run/restartsrv/startup', 0700) called at /usr/local/cpanel/Cpanel/RestartSrv.pm line 756
            Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47
        warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied
        [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup at /usr/local/cpanel/Cpanel/RestartSrv.pm line 759
            Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47
        warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup

    I then ran this:

        /usr/local/cpanel/bin/mailman-install --force -f

    Same message as above.

        root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/logs/error
        Apr 13 19:33:55 2012 mailmanctl(2255): PID unreadable in: /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid
        Apr 13 19:33:55 2012 mailmanctl(2255): [Errno 2] No such file or directory: '/usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid'
        Apr 13 19:33:55 2012 mailmanctl(2255): Is qrunner even running?

        root@server2 [~]# /scripts/restartsrv_mailman --status
        mailmanctl (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start) running as mailman with PID 24070
        3rdparty/mailman/bin/qrunner (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s) running as mailman with PID 24078

        root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid
        24070

        root@server2 [~]# ls -lah /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid
        -rw-rw---- 1 mailman mailman 6 Apr 13 19:47 /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid

        root@server2 [~]# ps aux | grep python
        mailman   4557 0.0 0.1 10484 6044 ?     S  19:40 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s
        mailman  24070 0.0 0.1 14268 4480 ?     Ss 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start
        mailman  24071 1.1 0.1 14052 6100 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=ArchRunner:0:1 -s
        mailman  24072 1.0 0.1 13444 6112 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=BounceRunner:0:1 -s
        mailman  24073 1.0 0.1 13040 6108 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=CommandRunner:0:1 -s
        mailman  24074 1.0 0.1 13484 6104 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=IncomingRunner:0:1 -s
        mailman  24075 1.0 0.1 12940 6136 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=NewsRunner:0:1 -s
        mailman  24076 1.0 0.1 13700 6172 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=OutgoingRunner:0:1 -s
        mailman  24077 1.0 0.1 13416 6100 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=VirginRunner:0:1 -s
        mailman  24078 0.9 0.1 13944 6100 ?     S  19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s
        root     24177 0.0 0.0  5428  756 pts/0 S+ 19:48 0:00 grep python

    Read the article

  • Should I always encapsulate an internal data structure entirely?

    - by Prog
    Please consider this class:

        class ClassA {
            private Thing[] things; // stores data

            // stuff omitted

            public Thing[] getThings() {
                return things;
            }
        }

    This class exposes the array it uses to store data to any interested client code. I did this in an app I'm working on. I had a ChordProgression class that stores a sequence of Chords (and does some other things). It had a Chord[] getChords() method that returned the array of chords. When the data structure had to change (from an array to an ArrayList), all client code broke. This made me think that maybe the following approach is better:

        class ClassA {
            private Thing[] things; // stores data

            // stuff omitted

            public Thing getThing(int index) {
                return things[index];
            }

            public int getDataSize() {
                return things.length;
            }

            public void setThing(int index, Thing thing) {
                things[index] = thing;
            }
        }

    Instead of exposing the data structure itself, all of the operations offered by the data structure are now offered directly by the enclosing class, through public methods that delegate to the data structure. When the data structure changes, only these methods have to change, and afterwards all client code still works. Note that collections more complex than arrays might require the enclosing class to implement even more than three methods just to access the internal data structure. Is this approach common? What do you think of it? What downsides does it have? Is it reasonable to have the enclosing class implement at least three public methods just to delegate to the inner data structure?
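    Two middle-ground options worth weighing against full delegation, assuming the return type becomes an interface such as List rather than a concrete array (that assumption is mine, not the question's): hand out a defensive copy, or an unmodifiable view. Either keeps the internal structure free to change without breaking callers:

        import java.util.*;

        class ChordProgression {
            // Can later become any List implementation without breaking clients.
            private List<Chord> chords = new ArrayList<>();

            // Option 1: defensive copy - callers get their own list.
            public List<Chord> getChordsCopy() {
                return new ArrayList<>(chords);
            }

            // Option 2: read-only view - cheap, but throws on modification attempts.
            public List<Chord> getChordsView() {
                return Collections.unmodifiableList(chords);
            }
        }

        class Chord { /* omitted */ }

    The array-to-ArrayList change described above would then be invisible to clients, since they only ever see the List interface.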

    Read the article

  • I want to reformat my external hard disk, but Disk Utility says it cannot be unmounted

    - by Faith
    Hi, I wanted to reformat my HP 500GB hard disk to MS-DOS (FAT) using Disk Utility, but Disk Utility kept saying that the hard disk cannot be unmounted. I tried googling it, but nothing seems to answer my question. I'm not trying to reformat the hard disk in my Mac, but my external hard disk! I usually format my hard disks to clear the documents I no longer need, and I'll change this one to ExFAT, but currently, whenever I plug the hard disk into Windows, the whole system just hangs. Can someone help?

    Read the article

  • Is running programs by address common?

    - by dgood1
    I have read some of the things posted here, and I keep reading about people running stuff like /foldername/executable -cmd NAME (I was reading about a programmer using Eclipse, so he was testing something he had made). I don't see things like that when I run things here (Ubuntu 12.04), because of the launcher and the Ubuntu button at the very top. That, and Eclipse Indigo has a button for running and testing the things it makes. I'm just asking how and why this is common? (I'm assuming it's done in the Terminal [Ctrl+Alt+T], but I'm not sure.)

    Read the article

  • Removing Duplicate Data From SQL Query Output For Display On A Web Page [migrated]

    - by doubleJ
    I had asked a similar question on Stack Overflow but didn't really get anywhere. This page shows the output that I'm currently getting from my MSSQL server. I have a table of venue information (name, address, etc.) for the venues our events happen at. Separately, I have a table of the actual events that are scheduled (an event may happen multiple times in one day and/or over multiple days). I join those tables with this query:

        <?php
        try {
            $dbh = new PDO("sqlsrv:Server=localhost;Database=Sermons", "", "");
            $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $sql = "SELECT TOP (100) PERCENT
                        dbo.TblSermon.Day, dbo.TblSermon.Date, dbo.TblSermon.Time,
                        dbo.TblSermon.Speaker, dbo.TblSermon.Series, dbo.TblSermon.Sarasota,
                        dbo.TblSermon.NonFlc, dbo.TblJoinSermonLocation.MeetingName,
                        dbo.TblLocation.Location, dbo.TblLocation.Pastors, dbo.TblLocation.Address,
                        dbo.TblLocation.City, dbo.TblLocation.State, dbo.TblLocation.Zip,
                        dbo.TblLocation.Country, dbo.TblLocation.Phone, dbo.TblLocation.Email,
                        dbo.TblLocation.WebAddress
                    FROM dbo.TblLocation
                    RIGHT OUTER JOIN dbo.TblJoinSermonLocation ON dbo.TblLocation.ID = dbo.TblJoinSermonLocation.Location
                    RIGHT OUTER JOIN dbo.TblSermon ON dbo.TblJoinSermonLocation.Sermon = dbo.TblSermon.ID
                    WHERE (dbo.TblSermon.Date >= { fn NOW() })
                    ORDER BY dbo.TblSermon.Date, dbo.TblSermon.Time";
            $stmt = $dbh->prepare($sql);
            $stmt->execute();
            $stmt->setFetchMode(PDO::FETCH_ASSOC);
            foreach ($stmt as $row) {
                echo "<pre>";
                print_r($row);
                echo "</pre>";
            }
            unset($row);
            $dbh = null;
        } catch (PDOException $e) {
            echo $e->getMessage();
        }
        ?>

    So, as it loops through the query results, it creates an array for each record and ends up like this:

        Array (
            [Day] => Tuesday
            [Date] => 2012-10-30 00:00:00.000
            [Time] => 07:00 PM
            [Speaker] => Keith Moore
            [Location] => The Ark Church
            [Pastors] => Alan & Joy Clayton
            [Address] => 450 Humble Tank Rd.
            [City] => Conroe
            [State] => TX
            [Zip] => 77305.0
            [Phone] => (936) 756-1988
            [Email] => [email protected]
            [WebAddress] => http://www.thearkchurch.org
        )
        Array (
            [Day] => Wednesday
            [Date] => 2012-10-31 00:00:00.000
            [Time] => 07:00 PM
            [Speaker] => Keith Moore
            [Location] => The Ark Church
            [Pastors] => Alan & Joy Clayton
            [Address] => 450 Humble Tank Rd.
            [City] => Conroe
            [State] => TX
            [Zip] => 77305.0
            [Phone] => (936) 756-1988
            [Email] => [email protected]
            [WebAddress] => http://www.thearkchurch.org
        )
        Array (
            [Day] => Tuesday
            [Date] => 2012-11-06 00:00:00.000
            [Time] => 07:00 PM
            [Speaker] => Keith Moore
            [Location] => Fellowship Of Faith Christian Center
            [Pastors] => Michael & Joan Kalstrup
            [Address] => 18999 Hwy. 59
            [City] => Oakland
            [State] => IA
            [Zip] => 51560.0
            [Phone] => (712) 482-3455
            [Email] => [email protected]
            [WebAddress] => http://www.fellowshipoffaith.cc
        )
        Array (
            [Day] => Wednesday
            [Date] => 2012-11-14 00:00:00.000
            [Time] => 07:00 PM
            [Speaker] => Keith Moore
            [Location] => Faith Family Church
            [Pastors] => Michael & Barbara Cameneti
            [Address] => 8200 Freedom Ave NW
            [City] => Canton
            [State] => OH
            [Zip] => 44720.0
            [Phone] => (330) 492-0925
            [Email] =>
            [WebAddress] => http://www.myfaithfamily.com
        )

    As you can see, The Ark Church and its associated contact information is duplicated, so when I work with those arrays and output them to the page, I see a bunch of duplicate content. I'd like to remove the duplicate information so that I get results similar to this:

        The Ark Church
        Alan & Joy Clayton
        450 Humble Tank Rd.
        Conroe, TX 77305
        (936) 756-1988
        [email protected]
        http://www.thearkchurch.org
        Meetings:
            Tuesday, 2012-10-30 07:00 PM
            Wednesday, 2012-10-31 07:00 PM

        Fellowship Of Faith Christian Center
        Michael & Joan Kalstrup
        18999 Hwy. 59
        Oakland, IA 51560
        (712) 482-3455
        [email protected]
        http://www.fellowshipoffaith.cc
        Meetings:
            Tuesday, 2012-11-06 07:00 PM

        Faith Family Church
        Michael & Barbara Cameneti
        8200 Freedom Ave NW
        Canton, OH 44720
        (330) 492-0925
        http://www.myfaithfamily.com
        Meetings:
            Wednesday, 2012-11-14 07:00 PM

    It doesn't necessarily have to end up exactly like that (I'm not looking for code specific to these results, but a concept of how not to show the duplicated information). I'm assuming that an additional foreach or while will do it, but I haven't figured out any logic beyond something like <?php if ($location == $previouslocation) echo ""; ?>.
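    The concept usually goes by the name "grouping" or a "control break": collect each venue's meetings under a single venue key while iterating, then print each venue's contact block once, followed by its meetings. A language-independent sketch of that logic, written here in Java with two hard-coded stand-in rows (a PHP version would follow the same shape, e.g. an array keyed by location):

        import java.util.*;

        public class GroupByVenue {
            record Meeting(String day, String date, String time) {}
            record Venue(String location, String pastors, String address, List<Meeting> meetings) {}

            public static void main(String[] args) {
                // Stand-ins for the rows the PDO query returns, ordered by date/time.
                List<Map<String, String>> rows = List.of(
                    Map.of("Location", "The Ark Church", "Pastors", "Alan & Joy Clayton",
                           "Address", "450 Humble Tank Rd.", "Day", "Tuesday",
                           "Date", "2012-10-30", "Time", "07:00 PM"),
                    Map.of("Location", "The Ark Church", "Pastors", "Alan & Joy Clayton",
                           "Address", "450 Humble Tank Rd.", "Day", "Wednesday",
                           "Date", "2012-10-31", "Time", "07:00 PM"));

                // LinkedHashMap keeps venues in first-seen (i.e. date) order.
                Map<String, Venue> venues = new LinkedHashMap<>();
                for (Map<String, String> row : rows) {
                    Venue v = venues.computeIfAbsent(row.get("Location"), loc ->
                        new Venue(loc, row.get("Pastors"), row.get("Address"), new ArrayList<>()));
                    v.meetings().add(new Meeting(row.get("Day"), row.get("Date"), row.get("Time")));
                }

                for (Venue v : venues.values()) {
                    System.out.println(v.location());
                    System.out.println(v.pastors());
                    System.out.println(v.address());
                    System.out.println("Meetings:");
                    for (Meeting m : v.meetings())
                        System.out.println("    " + m.day() + ", " + m.date() + " " + m.time());
                    System.out.println();
                }
            }
        }

    Because the rows are only ordered by date, accumulating into a map is safer than the if ($location == $previouslocation) comparison, which assumes all of a venue's rows arrive consecutively.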

    Read the article

  • How to safely copy an object?

    - by Prog
    This question is going to be a little long. Please bear with me. Something that happened in a project of mine made me think about how to safely copy objects. I'll present the situation I had and then ask a question. There was a class SomeClass:

        class SomeClass {
            Thing[] things;

            public SomeClass(Thing[] things) {
                this.things = things;
            }

            // irrelevant stuff omitted

            public SomeClass copy() {
                return new SomeClass(things);
            }
        }

    There was another class Processor that takes SomeClass objects, copies them (via someClassInstance.copy()), manipulates the copy's state, and returns the copy. Here it is:

        class Processor {
            public SomeClass processObject(SomeClass object) {
                SomeClass copy = object.copy();
                manipulateTheCopy(copy);
                return copy;
            }

            // irrelevant stuff omitted
        }

    I ran this, and it had bugs. I looked into them, and it turned out that the manipulations Processor does on copy actually affect not only the copy, but also the original SomeClass object that was passed into processObject. This was because the original and the copy shared state: the original passed its things field into the copy when creating it.

    This made me realize that copying objects is harder than simply instantiating them with the same fields as the original. For the two objects to be completely disconnected, without any shared state, each of the fields passed to the copy also has to be copied. And if that object contains other objects, they have to be copied too. And so on. So basically, in order to actually copy an object, each class in the system must have a copy() method that also invokes copy() on all of its fields, and so on. For example, for copy() in SomeClass to work, it needs to look like this:

        public SomeClass copy() {
            Thing[] copyThings = new Thing[things.length];
            for (int i = 0; i < things.length; i++)
                copyThings[i] = things[i].copy();
            return new SomeClass(copyThings);
        }

    And if Thing has object fields of its own, then its own copy() method must be appropriate:

        class Thing {
            Apple apple;
            Pencil pencil;
            int number;

            public Thing(Apple apple, Pencil pencil, int number) {
                this.apple = apple;
                this.pencil = pencil;
                this.number = number;
            }

            public Thing copy() {
                // 'number' is a primitive.
                return new Thing(apple.copy(), pencil.copy(), number);
            }
        }

    And so on. Of course, instead of all classes having a copy() method, the copying mechanism can live in the getters and constructors of classes (except where it isn't suitable, for example when a field points to an external object rather than to an object that is part of this object). Still, that means that in order to be able to safely copy an object, most classes would have to have copying mechanisms in their getters.

    My question is divided into two parts: How frequently do you need to get a copy of an object? Is this a regular issue? And: is the technique described common and/or reasonable, or is there a better or easier way to safely copy objects, without them sharing any state?
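    For what it's worth, one common convention for the deep-copy plumbing described above is the copy constructor: each class takes responsibility for deep-copying its own fields, and containers copy their elements. A minimal sketch, with Apple and Pencil as stand-ins following the same convention:

        class Apple {
            Apple() {}
            Apple(Apple other) { /* copy Apple's fields here */ }
        }

        class Pencil {
            Pencil() {}
            Pencil(Pencil other) { /* copy Pencil's fields here */ }
        }

        class Thing {
            private final Apple apple;
            private final Pencil pencil;
            private final int number;

            Thing(Apple apple, Pencil pencil, int number) {
                this.apple = apple;
                this.pencil = pencil;
                this.number = number;
            }

            // Copy constructor: deep-copies every object field; primitives just copy.
            Thing(Thing other) {
                this(new Apple(other.apple), new Pencil(other.pencil), other.number);
            }
        }

        class SomeClass {
            private final Thing[] things;

            SomeClass(Thing[] things) {
                this.things = things.clone();                   // copy the array itself
                for (int i = 0; i < this.things.length; i++)
                    this.things[i] = new Thing(this.things[i]); // deep-copy each element
            }

            SomeClass(SomeClass other) {
                this(other.things); // reuses the deep-copying constructor
            }
        }

    The other standard escape hatch is immutability: if Thing, Apple, and Pencil cannot change after construction, sharing them is safe and most of this copying disappears.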

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

    Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies, and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.

    The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.

    By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years, until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.

    If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?

    It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.

    When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    Present the same set of global symbols, with the same ELF versioning, as the real object.

    Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.

    If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        __iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.

    The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.

    A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try to run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is massive, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general-purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects.

    By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

    ...and so, I backed away, put it down for a few months and did other work...

    ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high-level view of how Solaris is built:

    An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    A top-level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
- A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them (see the sketch below).
- A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
- The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
- All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object code I'd added to ld.

With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday.

This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
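Before turning to those measurements, here is a minimal sketch of how the stub and stubinstall rules relate to the normal link rule in a library Makefile. All of the names (libfoo, OBJS, the stubs directory, the install path) are hypothetical, and the fragment is a simplification of the shared-definition approach rather than the actual ON makefile code:

    # One set of link definitions serves both the real object and the stub.
    DYNLIB =    libfoo.so.1
    MAPFILES =  mapfile-vers
    LDFLAGS =   -G -h$(DYNLIB) -ztext -zdefs -Bdirect -M $(MAPFILES)

    # Real object: a full link against the compiled objects.
    $(DYNLIB): $(OBJS)
            $(LD) $(LDFLAGS) -o $@ $(OBJS)

    # Stub object: the same command line plus -z stub. The interface
    # comes from the mapfiles, so the compiled objects are not needed.
    stub: $(MAPFILES)
            $(LD) $(LDFLAGS) -z stub -o stubs/$(DYNLIB)

    # Deliver the stub into the stub proto for consumers to link against.
    stubinstall: stub
            cp stubs/$(DYNLIB) $(STUBROOT)/usr/lib/$(DYNLIB)

Once every library can satisfy its consumers from the stub proto this way, ordering directives of the hypothetical form

    SUBDIRS = libc .WAIT libnsl .WAIT libsocket ...

have nothing left to protect, which is exactly the class of makefile logic whose removal is described above.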
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

- Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
- It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
- Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Summit reflections

    - by Rob Farley
So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left.

This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing.

My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably.

Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks.

PASSTV was a new thing this year. Justin, the host, featured on the couch and talked to a lot of people about a lot of things, including me (he talked to me about a lot of things; I don’t think he talked to a lot of people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time.

People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did.

I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week.

Of course it was good to catch up with old friends and make new ones.
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those I did catch, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much.

One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article
