Search Results

Search found 10355 results on 415 pages for 'shell extension'.


  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that are somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit... Something I have noticed in Ubuntu for a long time that has been frustrating to me is that when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, it goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the command itself, but visually, it overwrites the text that was displayed.) It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (mine is more like 120 characters, but for the sake of an example), and I want to echo the English alphabet. What I type is this: echo abcdefghijklmnopqrstuvwxyz But what my terminal looks like before I hit the Enter key is: pqrstuvwxyzghijklmno When I hit Enter, it echoes abcdefghijklmnopqrstuvwxyz, so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command on a terminal that was only 20 characters wide, would be this: echo abcdefghijklmno pqrstuvwxyz Background: I am using bash as my shell, and I have this line in my ~/.bashrc: set -o vi to be able to navigate the command line with vi commands. I am currently using Ubuntu 10.10 server, and connecting to the server with PuTTY. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width, and when I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line. This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands): when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a vi editor. I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line, and I still had the problem. I downloaded a fresh copy of PuTTY and didn't make any changes to the configuration - I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with PuTTY (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian
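
    The edit above points at the usual culprit: bash counts every character in $PS1 when it computes the line width, so any non-printing sequence (colors, window-title escapes) that is not wrapped in \[ \] makes readline miscount, and long lines wrap onto themselves exactly as described. A minimal sketch of the fix in ~/.bashrc (the color codes here are illustrative, not taken from the original profile):

        # Broken: the escapes are counted as printable, so the prompt
        # "looks" longer to readline than it really is.
        PS1='\e[1;32m\u@\h:\w\$ \e[0m'

        # Fixed: wrap every non-printing sequence in \[ ... \] so readline
        # can compute the true prompt width and wrap long commands correctly.
        PS1='\[\e[1;32m\]\u@\h:\w\$ \[\e[0m\]'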


  • ArchBeat Link-o-Rama for November 16, 2012

    - by Bob Rhubart
    X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil Abhijit Patil's article focuses on how to use X.509 Certificate Revocation Checking Functionality with the OCSP protocol to validate in-bound certificates. Although this article focuses on inbound validation using OCSP, Oracle WebLogic Server 12c also supports outbound OCSP validation. Leveraging Oracle Scorecard and Strategy Management for Everyday BI Needs "Oracle Scorecard and Strategy Management (OSSM) is built upon the premise that a scorecard system should not be separate from the BI system, like many comparable tools are today," says author Kevin McGinley. "Instead of a separate application with its own data, its own data definitions, and its own front-end, Oracle made the choice to integrate OSSM directly into OBIEE." Applying BI for personal productivity recognition and gamification | Capgemini Oracle Blog "It is quite obvious that if you want people to participate you need an appealing and intuitive user interface," says Capgemini's Henk Vermeulen in this interesting exploration of gamification in the enterprise. Build and release OSB projects with Maven | Edwin Biemond "With Maven we are able to build and deploy OSB projects," says Oracle ACE Edwin Biemond. "The artifacts generated by Maven, called snapshots and releases, can be automatically uploaded to a software repository. These versioned OSB jars can then be downloaded by the OSB servers and deployed." Biemond shows you how in this detailed technical post. ADF Generator for Dynamic ADF BC and ADF UI | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis' post is an extension of his OOW12 presentation, "Oracle ADF Implementations Around the Globe: Best Practices," and includes the sample application he promised to share. Service-oriented organizations have a head start in the cloud race | ZDNet ZDNet SOA blogger Joe McKendrick offers a snapshot of a recent report by Forrester analyst James Staten. Oracle Fusion Middleware Security: X509 Fallback to Form | Debasish Bhattacharya Oracle Fusion Middleware A-Team architect Debasish Bhattacharya shares a solution that resulted from brainstorming with colleagues Chris Johnson and Brian Eidelman. "The solution is not very difficult," says Bhattacharya, "though it needs some additional configurations and coding." It's all presented in this detailed post. Agile Architecture | David Sprott "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post. Thought for the Day "Operating systems are like underwear — nobody really wants to look at them." — Bill Joy Source: SoftwareQuotes.com


  • What is the rationale behind snazzy Window Managers/Composers?

    - by Emanuele
    This is more of a generic question, based on trying out Window Managers like Awesome, Mate and others. To me it looks like other Window Managers such as Gnome3 and/or Unity are heavy and pointless. I do understand that having all the composed UIs is more pleasant for the eye, but apart from that, what are the other major benefits? To give an example, when I run the game Heroes of Newerth (using nVidia drivers) under: Unity: the FPS drops sharply. Gnome3: FPS is ok, but X and other processes use 15~20% of CPU and quite some additional memory. Awesome: FPS is ok, and other processes use very little memory and CPU. Below are some numbers regarding what I'm saying (please note my system is 64 bit, AMD Phenom II X4, 8 GB RAM, and nVidia 470 GTX, SSD disk). All data is sorted by mem usage (watch -d -n 10 "ps -e -o pcpu,pmem,pid,user,cmd --sort=-pmem | head -20"); again note that the CPU time of ./hon-x86_64 might differ because I can't take the snapshots of the system at exactly the same time.

    Awesome:

        %CPU %MEM  PID USER CMD
        91.8 21.6 3579 ema  ./hon-x86_64
         2.4  0.9 3223 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.6  0.4 2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinp
         0.3  0.2 3602 ema  gnome-terminal
         0.0  0.2 2698 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service

    Gnome3:

        %CPU %MEM  PID USER CMD
        82.7 21.0 5528 ema  ./hon-x86_64
        17.7  1.7 5315 ema  /usr/bin/gnome-shell
         5.8  1.2 5062 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.0  0.4 5657 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.7  0.3 5331 ema  nautilus -n
         1.6  0.3 2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -
         0.9  0.2 5451 ema  gnome-terminal
         0.1  0.2 5400 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service

    Unity 3D:

        %CPU %MEM  PID USER CMD
        87.2 21.1 6554 ema  ./hon-x86_64
        10.7  2.6 6105 ema  compiz
        17.8  1.1 5842 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.3  0.9 6672 root /usr/bin/python /usr/sbin/aptd
         0.4  0.4 6606 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.5  0.3 6115 ema  nautilus -n
         1.5  0.3 2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinput -sasl errl
         0.3  0.2 6180 ema  /usr/lib/unity/unity-panel-service

    So my point is: what's the rationale behind going towards such heavy WMs/Composers?


  • Avoiding Duplicate Content Penalties on a Corporate/Franchise website

    - by heath
    My question is really an extension of a previous question that was ported from stackoverflow and closed, so I cannot edit it. The basic gist is that a regional franchise company has decided to force all independent stores into one website look; they currently all have their own domains and completely different websites. After reading the helpful answers and looking over some links provided, I think my solution is to put a 301 on each franchise store site (acme-store1.com, acme-store2.com, etc.) back to the main corporate site (acme.com). All of the company history, product info, etc. (about 90% of the entire site) applies to all stores. However, each store should have some exclusive content such as staff, location pictures, exclusive events and promotions, etc. I originally thought that I would simply do something like acme.com/store1/staff, acme.com/store2/staff, etc. for the store-exclusive content, and then acme.com/our-company, for example, would cover all stores. However, I now see two issues that I don't know how to solve. 1) They want to see site stats based on what store site the visitor came from. If a user comes from acme-store1.com, is redirected to acme.com and hits several pages, don't I need to somehow keep that original site in the new URL to track each page in that user's session and show they originally came from acme-store1.com? 2) Each store is still independently owned and is essentially still in competition with the other stores, albeit in less competition than they are with other brands. This is important because each store would like THEIR contact info, links to their social media pages, their mailing list sign-up and customer requests on EVERY page. So if a user originally goes to acme-store1.com and is redirected to acme.com, it should still look to the user like it's all about store 1, even though 90% of the content will be exactly the same as it is on the store 2, store 3 and corporate sites. For example, acme.com/our-company would have the same company history and the same header/footer/navigation, BUT depending on the original site the user came from, it would display contact info and links for THAT store. If someone came directly to the corporate site, it would display their contact and links (they have their own as well). I was considering that all redirects would be to store1.acme.com, store2.acme.com, etc. (or acme.com/store1) and then I can dynamically add the contact info and appropriate links based on the subdomain or subfolder. But then I have to worry about duplicate content penalties because, again, about 90% of the text in these "subdomains" is all the same. For reference, this is a PHP5 site. I've already written a compact framework utilizing templates and mod_rewrite that I've used for other sites. Is this an easy fix that I'm just not grasping? Any suggestions?


  • SOA Community Newsletter September 2012

    - by JuergenKress
    Dear SOA partner community member Are you ready for Oracle Open World 2012? If you are planning to attend, make sure that you prepare your trip to San Francisco. If you cannot make it, watch the keynotes live on-demand. You can also plan and decide to visit the SOA, Cloud and Service Technology Symposium 2012 and meet Tim Hall and Demed L'Her from our product management team in London. As an Oracle partner you will get a 50% discount on the conference pass; please use the code DJMXZ370 to avail of your discount. The BPM Solution Catalogue is now live; make sure you use the process examples and contribute your processes. SOA Proactive support is the best resource to support your SOA implementations. To administer your SOA systems, Enterprise Manager Cloud Control 12c is the best tool, and you can now attend the free on-demand training. EM12c Real User Experience Insight 12R1 gives you all the details; check out our new demo. The BPM11g demo for Oracle E-Business Suite has become available. A wonderful SOA demo case is the Fusion Order Demo; Antony Reynolds posted an article on how to update it on SOA Suite PS5. If you do use Coherence, e.g. for SOA Suite, check out the extension from our partner CloudTran. In this edition you will also find articles from: Automatically Disable Proxy Service to avoid overloading OSB by Jian Liang & Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn & Exploring MDS Explorer by Mark Nelson & Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL by Rajesh Raheja & Oracle Service Bus duplicate message check using Coherence by Jan van Zoggel & Installing Oracle SOA Suite 10g on Oracle Enterprise Linux by Lonneke Dikmans & Generating an EJB SDO Service Interface for Oracle SOA Suite by Edwin Biemond. Jürgen Kress Oracle SOA & BPM Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/soanewsSeptember2012 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress


  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.



  • Asynchronously returning hierarchical data using .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S and search each subdirectory on a hard drive, and I want to search for a word in each file... what should my API look like? In this scenario I know that each subdirectory will have 0..10000 files or 0..10000 directories. I know the tree is unbalanced and want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found ("better" meaning closer to the root of c:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report). Question: How should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | notstarted | complete), and I might base it on the System.Collections.Concurrent namespace. Another idea that I'm considering is the consumer/producer pattern (which concurrent collections can handle), however I'm not sure what the objects should "look" like. Optional logical constraint: the API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for. If someone were to literally do a DIR /S as described above then they would need to account for more than one matching file per subdirectory. More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node. It is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy I'm interested in getting the results FAST (first node found) but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want to get a report of ALL the nodes that contain this RowKey.


  • Setting up port forwarding for 7000 appliance VM in VirtualBox

    - by uejio
    I've been using the 7000 appliance VM for a lot of testing lately and relied on others to set up the networking for the VM for me, but finally, I decided to take the dive and do it myself. After some experimenting, I came up with a very brief number of steps to do this all using the VirtualBox CLI instead of the GUI. First download the VM image and unpack it somewhere. I put it in /var/tmp. Then, set your VBOX_USER_HOME to some place with lots of disk space and import the VM:

        export VBOX_USER_HOME=/var/tmp/MyVirtualBox
        VBoxManage import /var/tmp/simulator/vbox-2011.1.0.0.1.1.8/Sun\ ZFS\ Storage\ 7000.ovf

    (go get a cup of tea...) Then, set up port forwarding of the VM appliance BUI and shell. First set up the port as NAT:

        VBoxManage modifyvm Sun_ZFS_Storage_7000 --nic1 nat

    Then set up rules for port forwarding (pick some unused port numbers):

        VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestssh,tcp,,4622,,22"
        VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestbui,tcp,,46215,,215"

    Verify the settings using:

        VBoxManage showvminfo Sun_ZFS_Storage_7000 | grep -i nic

    Start the appliance:

        VBoxHeadless --startvm Sun_ZFS_Storage_7000 &

    Connect to it using your favorite RDP client. I use a Sun Ray, so I use the Sun Ray Windows Connector client:

        /opt/SUNWuttsc/bin/uttsc -g 800x600 -P <portnumber> <your-hostname> &

    The port number is displayed in the output of the --startvm command. (This did not work after I updated to VirtualBox 4.1.12, so maybe at this point you need to use the VirtualBox GUI.) It takes a while to first bring up the VM, so please be patient. The longest time is in loading the smf service descriptions, but fortunately that only needs to be done the first time the VM boots. There is also a delay in just booting the appliance, so give it some time. Be sure to set the NIC rule on only one port and not all ports, otherwise there will be a conflict in ports and it won't work. After going through the initial configuration screen, you can connect to it using ssh or your browser on the forwarded ports chosen above:

        ssh -p 4622 root@<your-host-name>
        https://<your-host-name>:46215

    BTW, for the initial configuration, I only had to set the hostname and password. The rest of the defaults were set by VirtualBox and seemed to work fine.
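
    A quick sanity check before reaching for the RDP client - generic commands, not from the original post, assuming the forwarded ports chosen above (4622 for ssh, 46215 for the BUI):

        # On the host: confirm the forwarded ports are listening
        netstat -an | grep -E '4622|46215'
        # Then try the appliance shell directly
        ssh -p 4622 root@localhost
        # and point a browser at the BUI: https://localhost:46215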


  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question, I am very new to cloud programming.) I am interested in designing (and developing) a (free software) program in C or C++ (probably, most of it being meta-programmed, i.e. part of the C code being generated). I am still in the thinking / designing phase. And I might perhaps give up. For reference, I am the main architect and implementor of GCC MELT, a domain specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT to C/C++ translator being written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software-based, GPLv3-friendly cloud computing, which probably means OpenStack.) I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least a massive GCC-compiled free software project like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL bases). So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understood that it should provide some Web service, probably through a RESTful service, so it should use an HTTP server library like onion. And that OpenStack is able to start (e.g. a dozen) such services. But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, etc. Do you have any references or examples of C / C++ programming on the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?


  • Some More New ADF Features in JDeveloper 11.1.2

    - by Steven Davelaar
    The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces - small, but they might come in handy: You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example: <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/> See section 3.5.2 in the web user interface guide for more info. A new ADF Faces Client Behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in the web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag. And this property has quite some issues, as you can read here. I did a quick test: the alert is shown for a button that is on the same page; however, if you have a menu in a shell page with dynamic regions, then clicking on another menu item does not raise the alert if you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (will blog about that soon). New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar. The largeIconSource property specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in the web user interface guide for more info. Also notice the failedConnectionText property, which I didn't know but was already available in JDeveloper 11.1.1.4. The af:showDetail tag has a new property handleDisclosure, which you can set to client for faster rendering. In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed: the #{bindings.JobId.inputValue} expression will return the attribute value corresponding with the selected index in the choice list. Did you discover other "hidden" new features? Please add them as a comment to this blog post so everybody can benefit.


  • /etc/postfix/transport missing; what should it look like?

    - by Thufir
    I'm following the mailman guide but couldn't locate /etc/postfix/transport, so I created it as follows:

        root@dur:~# cat /etc/postfix/transport
        dur.bounceme.net mailman:

        root@dur:~# telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 dur.bounceme.net ESMTP Postfix (Ubuntu)
        ehlo fqdn_test
        250-dur.bounceme.net
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        mail from:[email protected]
        250 2.1.0 Ok
        rcpt to:thufir@localhost
        451 4.3.0 <thufir@localhost>: Temporary lookup failure
        rcpt to:[email protected]
        451 4.3.0 <[email protected]>: Temporary lookup failure
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

        root@dur:~# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.dur.bounceme.net
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

        root@dur:~# tail /var/log/mail.log
        Aug 28 02:05:15 dur postfix/smtpd[20326]: connect from localhost[127.0.0.1]
        Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
        Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "thufir@localhost"
        Aug 28 02:06:10 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<fqdn_test>
        Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
        Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "[email protected]"
        Aug 28 02:06:23 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<fqdn_test>
        Aug 28 02:06:28 dur postfix/smtpd[20326]: disconnect from localhost[127.0.0.1]
        Aug 28 02:06:49 dur dovecot: pop3-login: Login: user=<thufir>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=20338, TLS
        Aug 28 02:06:49 dur dovecot: pop3(thufir): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0

    The manual page is here.
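
    The log narrows this down to two missing map databases rather than the transport file's contents. A plausible fix, sketched under the assumption of the stock Ubuntu/Mailman 2.x paths shown in the postconf output above: Postfix hash: maps must be compiled with postmap, and Mailman's aliases.db is generated by its genaliases tool.

        # Compile the transport map referenced by transport_maps
        postmap /etc/postfix/transport
        # Generate /var/lib/mailman/data/aliases.db referenced by alias_maps
        /usr/lib/mailman/bin/genaliases
        # Reload Postfix and repeat the telnet test
        postfix reload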


  • Five Fake Sounds Engineered to Make You Feel Better [Science]

    - by Jason Fitzpatrick
    As objects in our environment (like cars, ATMs, and phones) have grown lighter and quieter, scientists have been carefully engineering their sounds so that they continue to sound like we expect them to. Read on to see how. At the design blog Humans Invent they share five interesting ways that the world around us is being engineered so it sounds the way we expect it to. They start with the example of the car door. Years ago cars were almost entirely steel, the doors were weighty, and when you slammed them it sounded like one big hunk of steel locking into another big hunk of steel (which, in fact, it was). Newer cars are lighter, but people still crave that substantial clunk. Humans Invent highlights the effect of consumer desire: A car door is essentially a hollow shell with parts placed inside it. Without careful design the door frame amplifies the rattling of mechanisms inside. Car companies know that if buyers don't get a satisfying thud when they close the door, it dents their confidence in the entire vehicle. To produce the ideal clunk, car doors are designed to minimise the amount of high frequencies produced (we associate them with fragility and weakness) and emphasise low, bass-heavy frequencies that suggest solidity. The effect is achieved in a range of different ways – car companies have piled up hundreds of patents on the subject – but usually involves some form of dampener fitted in the door cavity. Locking mechanisms are also tailored to produce the right sort of click and the way seals make contact is precisely controlled. On average it takes 1.8 seconds to close a car door but in that time you're witnessing a strange kind of symphony composed by engineers and designers whose goal is to reassure you that it's rock solid. They mention lock mechanisms, something you may never have thought about. A friend of mine had a Ford Focus some years ago and that particular model had electric locks that, instead of giving a satisfying thunk or solid click, made this horrible gates-of-the-prison buzzing sound that was completely unnerving. Hit up the link below to see how sounds are engineered for car doors, electric motors, ATMs, and more. 5 Fake Sounds Designed to Help Humans [Humans Invent via Boing Boing]


  • TechEd 2012: Windows 8 And Metro

    - by Tim Murphy
    Windows 8 is here (or at least very close) and that was the main feature of this morning's keynote. Antoine LeBlond started off by apologizing to the IT professionals, since he planned on showing code. I'm not sure if IT Pros are that easily confused or why you would need such a disclaimer. Developers do real work, IT Pros just play with toys (just kidding). The highlights of the Windows 8 keynote for me started with some of the UI design elements that I had not seen when I was shown one of the Build tablets. Specifically, I liked the AppBar features that we have become used to with Windows Phone and some of the gesture features. Even though they have been available on other platforms before, I think Microsoft really got them right. Two other great features of Windows 8 that they demonstrated were the Hyper-V capabilities and the ability to run Windows 8 anywhere from a USB key. My jaw dropped through the floor seeing a feature-rich OS boot off of a thumb drive. WOW! I also can't wait to get rid of dual booting just to run Hyper-V images when developing. The morning continued with a session on Metro XAML development with Tim Heuer. While it included a lot of great XAML Metro demos, I was pleasantly surprised by some of the things I found out about Visual Studio 2012. Finding out that Blend is now integrated with VS2012 was a nice addition, after working with them as separate applications. Moving on to Metro, he introduced the nugget that WinRT is async everywhere. How deep this model goes will be an interesting thing to find out as I learn more about developing for the platform. Thankfully he followed that up with a couple of new keywords, await and async, that eliminate a lot of plumbing that has been required in the past for asynchronous transactions. Tim also related that since the Metro framework is relatively small and most apps will use a significant amount of it, the entire surface is referenced by default. This is a contrast to adding namespaces and assemblies one after another as we normally do. This was such a power-packed session that I can't detail it all here, so here is the teaser list: New icons in VS2012 for extension methods. Emulator/simulator testing features for gestures. Portable class libraries. XAML no longer managed code. And so much more … del.icio.us Tags: Windows 8,Metro,Tim Heuer,XAML,Windows Phone,Hyper-V,Antoine LeBlond,TechEd,TechEd 2012,Visual Studio 2012,Visual Studio


  • It's Here! Visual Studio 2010 and ASP.NET 4.0 Ship

    Today Microsoft released Visual Studio 2010 and ASP.NET 4.0. I've been using the RC version of Visual Studio 2010 quite a bit for the past couple of months and have really grown to like it. It has a host of features and enhancements that improve developer productivity, from improved IntelliSense to better multiple monitor support. Plus there's something about the user experience that, to me, makes it feel better than Visual Studio 2008. I don't know if it's the new blue color motif or what, but the IDE seems more modern looking and more responsive to my mouse movements and other input. Anyway, if you've not yet downloaded Visual Studio 2010 and ASP.NET 4.0, why not? As with previous versions of Visual Studio there's a free Express Edition, and VS2010 and ASP.NET 4.0 run side-by-side with earlier versions of Visual Studio and ASP.NET. And with Visual Studio 2010's multi-targeting you can even use VS2010 as your development editor for ASP.NET 2.0 and ASP.NET 3.5 web applications. (Although be forewarned, if you have multiple developers working on the application, that the project files in VS2010 and earlier versions of Visual Studio differ.) This week's article on 4Guys explores my favorite new features of Visual Studio 2010. Here's an excerpt: The Visual Studio 2010 user experience is noticeably different than with previous versions. Some of the changes are cosmetic - gone is the decades-old red and orange color scheme, having been replaced with blues and purples - while others are more substantial. For instance, the Visual Studio 2010 shell was rewritten from the ground up to use Microsoft's Windows Presentation Foundation (WPF). In addition to an updated user experience, Visual Studio introduces an array of new features designed to improve developer productivity. There are new tools for searching for files, types, and class members; it's now easier than ever to use IntelliSense; the Toolbox can be searched using the keyboard; and you can use a single editor - Visual Studio 2010 - to work on ASP.NET 2.0, 3.5, and 4.0 applications alike. This article explores some of the new features in Visual Studio 2010. It is not meant to be an exhaustive list, but rather highlights those features that I, as an ASP.NET developer, find most useful in my line of work. Read on to learn more! And, in closing, here are some helpful VS2010 and ASP.NET 4.0 links: One click installation for ASP.NET 4.0, Visual Web Developer 2010, .NET Framework 4.0, and ASP.NET MVC 2. Eight Quick Hit videos showing some of the cool new VS2010 features. VS2010 and ASP.NET 4.0 Release Announcement with some great info/links from none other than Scott Guthrie. Happy Programming!


  • Data Transformation Pipeline

    - by davenewza
    I have created some kind of data pipeline to transform coordinate data into more useful information. Here is the shell of the pipeline:

        public class PositionPipeline
        {
            protected List<IPipelineComponent> components;

            public PositionPipeline()
            {
                components = new List<IPipelineComponent>();
            }

            public PositionPipelineEntity Process(PositionPipelineEntity position)
            {
                foreach (var component in components)
                {
                    position = component.Execute(position);
                }
                return position;
            }

            public PositionPipeline RegisterComponent(IPipelineComponent component)
            {
                components.Add(component);
                return this;
            }
        }

    Every IPipelineComponent accepts and returns the same type - a PositionPipelineEntity. Code:

        public interface IPipelineComponent
        {
            PositionPipelineEntity Execute(PositionPipelineEntity position);
        }

    The PositionPipelineEntity needs to have many properties, many of which are unused in certain components and required in others. Some properties will also have become redundant at the end of the pipeline. For example, these components could be executed: TransformCoordinatesComponent: parses the raw coordinate data into a Coordinate type. DetermineCountryComponent: determines and stores the country code. DetermineOnRoadComponent: determines and stores whether the coordinate is on a road. Code:

        pipeline
            .RegisterComponent(new TransformCoordinatesComponent())
            .RegisterComponent(new DetermineCountryComponent())
            .RegisterComponent(new DetermineOnRoadComponent());

        pipeline.Process(positionPipelineEntity);

    The PositionPipelineEntity type:

        public class PositionPipelineEntity
        {
            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLatitude { get; set; }

            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLongitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLatitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLongitude { get; set; }

            // Set in DetermineCountryComponent, not required anywhere else.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public string CountryCode { get; set; }

            // Set in DetermineOnRoadComponent, not required anywhere else.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public bool OnRoad { get; set; }
        }

    Problems: I'm very concerned about the dependency that a component has on properties. The way to solve this would be to create specific types for each component. The problem then is that I cannot chain them together like this. The other problem is that the order of components in the pipeline matters, as there is some dependency between them, and the current structure does not provide any static or runtime checking for such a thing. Any feedback would be appreciated.


  • How can I make a universal construction more efficient?

    - by VF1
    A "universal construction" is a wrapper class for a sequential object that enables it to be linearized (a strong consistency condition for concurrent objects). For instance, here's an adapted wait-free construction, in Java, from [1], which presumes the existence of a wait-free queue that satisfies the interface WFQ (which only requires one-time consensus between threads) and assumes a Sequential interface: public interface WFQ<T> // "FIFO" iteration { int enqueue(T t); // returns the sequence number of t Iterable<T> iterateUntil(int max); // iterates until sequence max } public interface Sequential { // Apply an invocation (method + arguments) // and get a response (return value + state) Response apply(Invocation i); } public interface Factory<T> { T generate(); } // generate new default object public interface Universal extends Sequential {} public class SlowUniversal implements Universal { Factory<? extends Sequential> generator; WFQ<Invocation> wfq = new WFQ<Invocation>(); Universal(Factory<? extends Sequential> g) { generator = g; } public Response apply(Invocation i) { int max = wfq.enqueue(i); Sequential s = generator.generate(); for(Invocation invoc : wfq.iterateUntil(max)) s.apply(invoc); return s.apply(i); } } This implementation isn't very satisfying, however, since it presumes determinism of a Sequential and is really slow. I attempted to add memory recycling: public interface WFQD<T> extends WFQ<T> { T dequeue(int n); } // dequeues only when n is the tail, else assists other threads public interface CopyableSequential extends Sequential { CopyableSequential copy(); } public class RecyclingUniversal implements Universal { WFQD<CopyableSequential> wfqd = new WFQD<CopyableSequential>(); Universal(CopyableSequential init) { wfqd.enqueue(init); } public Response apply(Invocation i) { int max = wfqd.enqueue(i); CopyableSequential cs = null; int ctr = max; for(CopyableSequential csq : wfq.iterateUntil(max)) if(--max == 0) cs = csq.copy(); wfqd.dequeue(max); return cs.apply(i); } } Here are my specific questions regarding the extension: Does my implementation create a linearizable multi-threaded version of a CopyableSequential? Is it possible extend memory recycling without extending the interface (perhaps my new methods trivialize the problem)? My implementation only reduces memory when a thread returns, so can this be strengthened? [1] provided an implementation for WFQ<T>, not WFQD<T> - one does exist, though, correct? [1] Herlihy and Shavit, The Art of Multiprocessor Programming.


  • Are 2 lines of push/pop code for each pre-draw state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have 2 lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface through which a client can give class/struct refs that he wants restored after the rendering calls. Also note that many beginner programmers will be using this, so forcing lambda expressions or other advanced C# features to be used in client code is not a good idea. I attempted to accomplish my goal by using Daniel Earwicker's Ptr class:

        public class Ptr<T>
        {
            Func<T> getter;
            Action<T> setter;
            public Ptr(Func<T> g, Action<T> s)
            {
                getter = g;
                setter = s;
            }
            public T Deref
            {
                get { return getter(); }
                set { setter(value); }
            }
        }

    an extension method:

        // doesn't work for structs since this is just syntactic sugar
        public static Ptr<T> GetPtr<T>(this T obj)
        {
            return new Ptr<T>(() => obj, v => obj = v);
        }

    and a Push function:

        // returns a Pop Action for later calling
        public static Action Push<T>(ref T structure) where T : struct
        {
            T pushedValue = structure; // copies the struct data
            Ptr<T> p = structure.GetPtr();
            return new Action(() => { p.Deref = pushedValue; });
        }

    However this doesn't work, as stated in the code. How might I accomplish my goal? Example of code to be refactored:

        protected override void RenderLocally(GraphicsDevice device)
        {
            if (!(bool)isCompiled) { Compile(); }
            // TODO: make sure state settings don't implicitly delete any buffers/resources
            RasterizerState oldRasterState = device.RasterizerState;
            DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat;
            DepthStencilState oldBufferState = device.DepthStencilState;
            {
                // Rendering code
            }
            device.RasterizerState = oldRasterState;
            device.DepthStencilState = oldBufferState;
            device.PresentationParameters.DepthStencilFormat = oldFormat;
        }


  • What is 'Ubuntu Unity' (for the Desktop)?

    - by Martin
    Ok, so there's the buzz about Canonical (wanting to) switch, for new Ubuntu versions, from the default GNOME desktop to their own Unity shell. (I hope that's accurate.) It seems I cannot totally fathom what Unity actually is. Looking at its homepage, it currently is firmly targeted at netbooks and their somewhat different usage model. Is it a classical desktop? -- Taskbar? Shortcuts? Is the difference between Ubuntu(GNOME)+Unity more/less pronounced than the difference between Ubuntu and Kubuntu? Will "my parents" be able to get the interface if they've been using the classical GNOME desktop so far? Edit: I would not like to split this up into more specific questions, as What is Unity? is exactly what the people I set up Ubuntu boxes for will ask me if they hear that the newer Ubuntu version is using that instead of the Desktop -- and it might well happen someone phrases it like that :-) I will certainly not give them the link to the homepage, as the explanation there does not lay out whether it is a desktop or something more or something less. (It does not for me - therefore I'm asking here.) Unity is designed for netbooks and related touch-based devices. It includes [...] that makes it fast and easy to access [...] while removing screen elements that are rarely used in mobile and netbook computing. (emphasis mine) -- the explanation there doesn't even mention the desktop PC! Unity has a vertical task management panel on the left-hand side and a menu panel at the top of the screen. [...] This sounds like a re-themed normal desktop. Clicking on an icon will give the target application focus if it is already running or launch it if it is not already running. If you click the ... Aha. Sounds like Windows 7. ... icon of an application that already has focus, Unity will activate an Expose-style view of all the open windows associated with that application. No clue what that's supposed to be. So it would really be nice if someone could explain for non desktop-design-terms experts what Unity is.


  • Unable to configure/setup 5.1 audio with 12.04

    - by Vipin Vinayan
    I am kinda new to Ubuntu as well. I have been having this issue with audio for quite some time now. Initially, when I installed version 11.10 (I guess), I was able to use my 5.1 speakers without any issues. If my memory serves me right, it was after an update that the 5.1 audio stopped working and the video resolution would not get saved. I temporarily fixed the resolution issue by creating a start-up shell script that would update the resolution and load it. But the issue with audio has been going on for quite some time now. Even though I have the option for 5.1, only two speakers seem to be working. I thought an upgrade would fix the issue and so upgraded the OS to version 12.04. I also tried uninstalling alsa and pulseaudio, reinstalling them, and changing the /etc/pulse/daemon.conf channels from 2 to 6. I have also tried installing pavucontrol but nothing seems to have worked and the issue still persists. Is there anything else you could suggest? The lspci output on my computer is as follows:

        00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10)
        00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10)
        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10)
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01)
        00:1d.0 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA Controller [IDE mode] (rev 01)
        00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)

    Would really appreciate a response that will assist me in resolving my issue. Thanks in advance, Vipin
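
    For reference, the two knobs mentioned above, sketched as generic PulseAudio/ALSA steps rather than a confirmed fix for this particular chipset (the card profile name in the last line is an assumption and may vary per card):

        # In /etc/pulse/daemon.conf, uncomment and set:
        #   default-sample-channels = 6
        pulseaudio -k            # kill the daemon; it respawns with the new config
        speaker-test -c6 -twav   # should address all six channels in turn
        # If only the front pair plays, try forcing the 5.1 profile:
        pacmd set-card-profile 0 output:analog-surround-51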


  • Getting Started Plugging into the "Find in Projects" Dialog

    - by Geertjan
    In case you missed it amidst all the code in yesterday's blog entry, the "Find in Projects" dialog is now pluggable. I think that's really cool. The code yesterday gives you a complete example, but let's break it down a bit and deconstruct it to a very simple hello world scenario. We'll end up with as many extra tabs in the "Find in Projects" dialog as we need, for example, three in this case. And clicking on any of those extra tabs will, in this simple example, simply show us the text "Hello World". Once we have that, we'll be able to continue adding small bits of code over the next few blog entries until we have something more useful. So, in this blog entry, you'll literally be able to display "Hello World" within a new tab in the "Find in Projects" dialog:

        import javax.swing.JComponent;
        import javax.swing.JLabel;
        import org.netbeans.spi.search.provider.SearchComposition;
        import org.netbeans.spi.search.provider.SearchProvider;
        import org.netbeans.spi.search.provider.SearchProvider.Presenter;
        import org.openide.NotificationLineSupport;
        import org.openide.util.lookup.ServiceProvider;

        @ServiceProvider(service = SearchProvider.class)
        public class ExampleSearchProvider1 extends SearchProvider {

            @Override
            public Presenter createPresenter(boolean replaceMode) {
                return new ExampleSearchPresenter(this);
            }

            @Override
            public boolean isReplaceSupported() {
                return false;
            }

            @Override
            public boolean isEnabled() {
                return true;
            }

            @Override
            public String getTitle() {
                return "Demo Extension 1";
            }

            public class ExampleSearchPresenter extends SearchProvider.Presenter {

                private ExampleSearchPresenter(ExampleSearchProvider1 sp) {
                    super(sp, true);
                }

                @Override
                public JComponent getForm() {
                    return new JLabel("Hello World");
                }

                @Override
                public SearchComposition composeSearch() {
                    return null;
                }

                @Override
                public boolean isUsable(NotificationLineSupport nls) {
                    return true;
                }
            }
        }

    That's it, not much code, works fine in NetBeans IDE 7.2 Beta, and is easier to digest than the big chunk from yesterday. If you make three classes like the above in a NetBeans module, and you install it, you'll have three new tabs in the "Find in Projects" dialog. The only required dependencies are Dialogs API, Lookup API, and Search in Projects API. Read the javadoc linked above, and then in the next blog entries we'll continue to build out something like the sample you saw in yesterday's blog entry.


  • Disk drive for / not ready on boot after upgrade from 10.04 to 12.04

    - by Mathieu M-Gosselin
    After upgrading (using the Upgrade button from the update manager) from 10.04.4 to 12.04.1, I cannot boot anymore. Upon booting, I am greeted with the Ubuntu logo and the error "The disk drive for / is not ready yet or not present". I have the option to wait, to skip, and to access a basic shell. Waiting overnight did nothing; skipping just gives me the same error for /tmp, /home, then for a UUID, and finally it just goes to a black screen with a white "_" in the top left corner. My setup is a dual-boot one with XP on a single hard drive; I use separate partitions for / and /home. Back in the day I installed 8.04 directly from the CD while leaving a partition for XP, which I installed after. This setup had never caused any such issues, even when upgrading from 8.04 to 10.04. I have done plenty of research regarding this issue, as many others seem to have had similar issues after doing the same upgrade as me. However, while for most people the fix was running: apt-get -f install after remounting / in read-write, it didn't do it for me. I got dependency errors (see here), which I also investigated. I found https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/990740 where most people say the solution that worked is (prior to running the above command) running: apt-get install -o APT::Immediate-Configure=false -f apt python-minimal but that also got me a lot of dependency errors as output (see here), similar to #34 in the above thread. I also read that running: dpkg --configure -a could help; at first it wouldn't run because it had trouble parsing /var/lib/dpkg/status, since there was an extra blank line in a package description (see https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/916799), but I removed it using vim (and then reran the command). It still gives me output that looks like an error, though. Here it is: http://paste.ubuntu.com/1338074/. I also tried re-running the above apt-get commands after that, to no avail. I'm running out of things to try in the hope of getting this fixed; your help would be very much appreciated! Thank you in advance.
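
    One generic first step from the recovery shell, offered as an assumption rather than a confirmed diagnosis: "disk drive for / is not ready" after an upgrade is often a stale /etc/fstab, so it is worth comparing the UUIDs the kernel actually sees against the ones fstab mounts.

        mount -o remount,rw /   # make the root filesystem writable
        blkid                   # list the real UUIDs of every partition
        cat /etc/fstab          # compare the entries for /, /home and /tmp
        # If they differ, correct /etc/fstab (e.g. with nano) and reboot.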

    Read the article

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (Mountain Lion). The only missing thing is that Mac OS X can't read my external ZFS-based USB drive, where I store all my data. So, I decided to look for a solution.

    Possible solution: use VirtualBox with a Solaris 11 VM as a passthrough to my data. Here are the required steps.

    Installing a Solaris 11 VM
    - Install VirtualBox on your Mac OS X and add the extension pack (needed for USB support).
    - Plug your ZFS-based USB drive into your Mac; ignore it when asked to initialize it.
    - Create a VM for Solaris (bridged network) and, before installing it, create a USB filter (in the settings of your VirtualBox VM, go to Ports, then USB, then add a new USB filter from the attached device: the grey USB-connector icon with the green plus sign).
    - Install the Solaris 11 VM, boot it, and install the Guest Additions.
    - Check the IP address of your Solaris VM with "ifconfig -a".

    Creating a path to your ZFS USB drive
    - In Mac OS X, use Disk Utility to unmount the attached USB drive, then unplug the USB device.
    - Switch back to VirtualBox and select the top of the window where your Solaris 11 VM is running.
    - Plug in your ZFS USB drive, and select "Ignore" if Mac OS X invites you to initialize the disk.
    - In the VirtualBox VM menu, go to "Devices", then "USB Devices", and select your USB device from the drop-down menu.

    Connecting your Solaris VM to the USB drive
    - Inside Solaris, check that your device is accessible by using the "format" CLI command. If not, repeat the previous steps.
    - Now, with root privilege, force an import with "zpool import -f myusbdevicepoolname", because this pool was created on another system.
    - Check that you see your new pool with "zpool status".
    - Share your pool over NFS: "share -F nfs /myusbdevicepoolname".

    Accessing the USB ZFS drive from Mac OS X
    This is the easiest step: accessing an NFS share from Mac OS X.
    - Create a "ZFSdrive" folder on your Mac OS X desktop.
    - From a terminal under Mac OS X: mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive

    Et voilà! You can now access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You can adjust the share rights to alter read/write permissions as needed, and you can activate compression or encryption inside the Solaris 11 VM. The full command sequence is condensed in the sketch below.
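
    Condensed into commands, the whole path from pool to desktop looks like this. It is a sketch only: the pool name and user name come from the steps above, and the VM IP address is a placeholder to replace with the one reported by "ifconfig -a".

        # Inside the Solaris 11 VM, with root privilege:
        zpool import -f myusbdevicepoolname    # force the import; the pool was created on another system
        zpool status                           # confirm the pool is online
        share -F nfs /myusbdevicepoolname      # export the pool over NFS

        # On the Mac OS X side:
        mkdir -p /Users/yourusername/Desktop/ZFSdrive
        mount -t nfs 192.168.0.10:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive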

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's "Building the Fastest SQL Servers" session at TechEd last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills, coupled with an astute awareness of the value of promoting one's work.

    I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block.

    In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute.

    Many developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization, unaware of what's involved in getting an application that is 'done' ready for production, may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess.

    My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used, and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members.

    As always, we'd love to hear what you think. This very week, Simple-Talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background
    The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5: in PS4 the configuration file was XML-based, and in PS5 it is name-value property-based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.

    PS4 - Configuration File
    The sample configuration file for PS4 contains information about the following items:
    - Directory for incoming and outgoing files for the host running SOA Suite Healthcare
    - Polling interval for the directory
    - External endpoint logical names
    - External endpoint server host name and ports
    - Message throughput to be simulated for generating outbound messages
    - Documents to be handled by different endpoints
    A copy of this file can be downloaded from here.

    PS5 - Configuration File
    The corresponding sample configuration file for PS5 contains similar information about the sample scenario but is not in XML format; it has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.

    Simulator Configuration
    Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME; a working sample shell script that sets the environment is sketched below. As part of setting the environment, template jndi.properties and logging.properties files can be generated by using the following ant command:

        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

    The generated jndi.properties contains information about connectivity to the local WebLogic Managed Server instance, and the logging.properties file controls the amount of logging produced by the running simulator process. Both can be modified as needed.

    Simulator Usage - Start and Stop
    The command syntax to launch the simulator via ant is the same in PS4 and PS5; only the appropriate configuration file has to be supplied as the command-line argument, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

    This will start the simulator, which keeps running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop
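
    A minimal sketch of the environment-setup script referenced above; the JAVA_HOME and ANT_HOME values are assumptions, so substitute the locations from your own middleware installation. The ant targets themselves are the ones documented in this entry.

        #!/bin/sh
        # Assumed installation paths; adjust to your own middleware home
        export JAVA_HOME=/u01/app/oracle/middleware/jdk
        export ANT_HOME=/u01/app/oracle/middleware/org.apache.ant_1.7.1
        export PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$PATH

        # Generate template jndi.properties and logging.properties
        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

        # Start the simulator with the sample PS4-style configuration file
        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

        # Stop it when the demo is finished
        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop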

    Read the article

< Previous Page | 326 327 328 329 330 331 332 333 334 335 336 337  | Next Page >