Search Results

Search found 23220 results on 929 pages for 'default constraint'.


  • 12.04 LTS boot hangs at "SP5100 TCO timer: mmio address 0xfec000f0 already in use", didn't yesterday

    - by DarkIron112
    Dual-booting Windows 7 and Ubuntu 12.04 LTS. I went to reboot from Windows to Ubuntu and found a few interesting things:

    - My POST screen is covered in blocks of epileptic colors until I hit GRUB, and this continues when I try to boot into Ubuntu. These color blocks don't appear when I use my on-board VGA, so I'll attribute them to the card.
    - GRUB's dimensions are swapped (card vs. onboard, probably), but when interfacing with the onboard VGA the GRUB timeout counter works, and when using my card it does not (see [!!!] below for more information).
    - Booting into Ubuntu directly causes the error: SP5100 TCO timer: mmio address 0xfec000f0 already in use
    - Booting into recovery mode and then "resuming normal boot" gets me to the desktop, but without the native 1440x900 resolution, and the graphics drivers can't tell what monitor they're looking at (I assume this is because it's not a full graphical boot and, as it says, some drivers won't run).
    - [!!!] When I reboot after going into recovery mode, the countdown timer works ONCE, puts me back into the default Ubuntu boot, and then does not work again until after another recovery-mode boot.
    - Windows 7 boots perfectly, with no epileptic color blocks or driver-detection issues whatsoever. This makes me wonder why the POST screen can't handle my video card anymore.

    Amidst all the diagnostics, I opened my case and re-seated the video card securely, ensuring it wasn't a loose connection -- but this did nothing to help me. Hardware: I am running an NVidia GeForce GTX 8800 video card in a PCI slot. I have 4.8 GiB memory and an AMD Athlon II quad-core 640 processor, on an MSI K9N6GM series mobo. Onboard video is an NVidia GeForce MCP61(V/S/P) card. Note: I did not have any of these problems yesterday. I have been using Ubuntu intensively for a week, though it's been working flawlessly for months. I've recently been using it to mod my Android phone; perhaps I messed something up in the file system?
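    A workaround commonly suggested for this exact message is to stop the sp5100_tco watchdog module from loading, since it is what claims that mmio address at boot. This is only a sketch, not a confirmed fix for the hang described above, and it assumes you can reach a shell via recovery mode:

        # Blacklist the SP5100 TCO watchdog module (common workaround; whether
        # it is the real cause of this particular hang is an assumption):
        echo "blacklist sp5100_tco" | sudo tee /etc/modprobe.d/blacklist-sp5100-tco.conf
        sudo update-initramfs -u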


  • Are the Ubuntu ISO images on releases.ubuntu.com updated after release?

    - by tijybba
    Just got this idea from another (possibly unrelated) question. Are the ISO images on the official site updated with updates to the core Ubuntu system -- kernel updates, desktop environment updates (Unity), I mean updates to the BASE system including X.org, the office suite, the package manager, the update manager and the GNOME base modules -- that is, the updates released in branches like precise-updates? The reason I am asking is that if I download the ISO image of Ubuntu 12.04, say, two or three months after release, I then have to download approximately 200~300 MB of updates. So why are these ISO images not refreshed with recent updates? I am aware that all of the components are not updated at the same time, but let's say one month after the actual release (for both LTS and normal releases), the updated components could be rolled into an updated ISO at regular intervals, which would give new users the latest versions and features with improved stability and less bandwidth consumption. I am not talking about a rolling release, updates from external PPAs, or a netinstall, but an ISO of updated packages, provided as an optional download. Since my question stays within the boundary of official update releases, stability should not be the objection. I guess there are custom packagers out there, but having an official option would be better. It would help distribute a newer ISO that impresses new users, since it makes newer features available and of course a faster system. Another reason for asking is here. Edit: Almost all new (desktop) users download the default ISOs, which may have one or a few issues that have since been corrected in updates. Most of the new laptop users I encountered gave up because of this; so should I suggest, for laptops not listed on the certified hardware list, trying the daily builds if needed?
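    For what it's worth, a quick sketch of how to see the update cost being described, run on a freshly installed system (answering "n" at the prompt leaves the system untouched), plus the usual trick for keeping a daily image current without a full re-download (the zsync URL pattern is illustrative, not verified):

        sudo apt-get update
        sudo apt-get dist-upgrade   # the "Need to get ..." line shows the download size; answer "n"
        # Refresh a daily image in place, fetching only the changed blocks:
        zsync http://cdimage.ubuntu.com/daily-live/current/precise-desktop-amd64.iso.zsync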


  • Efficient PuTTY workflow / configuration

    - by Adrian Ratnapala
    PuTTY is a fine SSH client, but how do you get a workflow managed as slickly as OpenSSH on Unix? My issues with PuTTY's management are:

    - PuTTY tools are not in my PATH (easily fixable)
    - PuTTY seems to have no equivalent of ~/.ssh, so I end up having to manually choose locations for my keypairs and then manually tell all the tools where to find them, every time
    - The private key's read permissions seem lax (I might be wrong about this; I'm a klutz on Windows)
    - Pageant doesn't run by default (easily fixable?)
    - Other programs don't reliably find Pageant

    I suspect all of these problems can be fixed if I just set my system up properly and/or organise a nice workflow that fits into PuTTY's way of doing things. So can anyone share some success stories about managing PuTTY?


  • How do you manage large web farms?

    - by Andrew Katz
    I have a quickly growing web farm running IIS 7 (30+ servers). All servers are identical copies of each other, and all servers are physical. We update the software about once a month, and the current process follows these steps:

    1. Disable the server in the pool on the F5 load balancer.
    2. Disable HTTP keep-alives in IIS so connections drop quickly.
    3. Change the default directory of the website to the new folder containing the new binaries.
    4. Test the server.
    5. Enable HTTP keep-alives.
    6. Enable the server in the F5 pool.
    7. Move on to server 2.

    Microsoft used to have Application Center, which was abandoned a while ago. They have made a second attempt with the Web Farm Framework, but this adds as much QA time testing the release package as it saves in the deployment. Has anyone seen a commercial off-the-shelf application that is tailored for managing and deploying to large web farms? Thanks!


  • How do you set up DNS in Windows Server 2008 in a Hyper-V environment?

    - by Nathan DeWitt
    I have a laptop running Server 2008 and Hyper-V. I have created a virtual machine, also running Server 2008, which I promoted to a domain controller with dcpromo. I disabled IPv6 because I had no idea how to enter a default address, and I just wanted to make a standalone MOSS dev environment. I have tried every combination of creating a virtual network on the host and then connecting to that in the VM, but I can't get the VM to communicate with the host, and vice versa. No pinging, no copy and paste, nothing. Thanks. Update: My VM (which is its own DC) currently does not have a static IP. When I set the IP to static, I could not find anything that would let it talk to the host machine.


  • SABnzbd installed on a Linux NAS

    - by Mike Szp.
    I installed SABnzbd on a Linux-formatted NAS. The directory it downloads to is mapped differently on the NAS itself, because the path that SABnzbd knows about starts in its own folder. If this sounds confusing, let me give you an example. The path of the drive on the NAS is:

        \\MYNAS\Volume_1\

    I would like my SABnzbd downloads to go to:

        \\MYNAS\Volume_1\Downloads

    Right now SABnzbd is installed to:

        \\MYNAS\Volume_1\ffp\opt\optware\share\SABnzbd

    And the default download directory (as indicated in SABnzbd) is:

        /ffp/opt/optware/share/SABnzbd/downloads/complete

    I know that the mapping is different somehow because it is installed on the NAS, but I am lost as to what I should do. So far I have tried the following for the complete folder:

        /192.168.restofip/Volume_1/downloads/complete
        /Volumes/Volume_1/downloads/complete
        /Volume_1/downloads/complete

    Does anyone know how to change the path so that I can have it download to one of the topmost folders on the NAS, instead of a folder so deep in the drive?
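    SABnzbd only understands paths as the NAS itself sees them, not UNC share names. A sketch for finding the local path behind the Volume_1 share (the mount point shown is an example, not the real one):

        # On the NAS, see where the volumes are actually mounted:
        mount
        df -h
        # If Volume_1 turns out to live at, say, /mnt/HD_a2, then point
        # SABnzbd's completed-downloads folder at:
        #   /mnt/HD_a2/downloads/complete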


  • No virtual console on Ubuntu 12.10

    - by Buzzzz
    When I try to do a Ctrl-Alt-F(1-6) in Ubuntu 12.10 I only get a black screen with a blinking cursor, but no login prompt. Any ideas on what could be wrong? It is a fresh install of 12.10 with an AMD Radeon 5850 graphics card. I have tried different things in my /etc/default/grub, but at the moment I use the following:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vga=normal"
        #GRUB_CMDLINE_LINUX="vga=0x0376"
        #RUB_CMDLINE_LINUX_DEFAULT="vga=0x014c"
        #GRUB_CMDLINE_LINUX="vga=0x014c"
        #GRUB_GFXPAYLOAD_LINUX=1600x1200x24

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
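    One sketch worth trying, on the assumption that the vga= parameter is fighting with kernel mode setting (which is what draws the virtual consoles on the radeon driver):

        # Remove vga=normal from the kernel command line (or edit the file by
        # hand) and regenerate grub.cfg:
        sudo sed -i 's/ vga=normal//' /etc/default/grub
        sudo update-grub
        sudo reboot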


  • Can't Start ISC DHCP IPv6 Server

    - by MrDaniel
    Trying to enable the ISC DHCP server for just IPv6 on Ubuntu 12.04 LTS. I have downloaded and installed the DHCP server via the following command:

        $ sudo apt-get install isc-dhcp-server

    Then I followed the instructions in the following resources: Ubuntu Wiki DHCPv6, SixXS - Configuring ISC DHCPv6 Server, and Linux IPv6 HOWTO - Configuration of the ISC DHCP server for IPv6. From reviewing all those resources, it seems I need to:

    1. Set a static IPv6 address, inside the IPv6 subnet but outside the DHCP range, on the interface the DHCPv6 server will serve.
    2. Edit the /etc/dhcp/dhcpd6.conf file to configure the DHCPv6 range etc.
    3. Create the /var/lib/dhcp/dhcpd6.leases file.
    4. Manually start the DHCPv6 server.

    Setting the static IP for eth0:

        $ sudo ifconfig eth0 inet6 add 2001:db8:0:1::128/64

    My dhcpd6.conf:

        default-lease-time 600;
        max-lease-time 7200;
        log-facility local7;

        subnet6 2001:db8:0:1::/64 {
            #Range for clients
            range6 2001:db8:0:1::129 2001:db8:0:1::254;
        }

    Created the dhcpd6.leases file, as indicated in the dhcpd.leases man page:

        $ touch /var/lib/dhcp/dhcpd6.leases   # tried with sudo as well

    Manually starting the DHCPv6 server, using the following command:

        $ sudo dhcpd -6 -f -cf /etc/dhcp/dhcpd6.conf eth0

    The problem: the DHCP server will not start, exiting with an append error for the dhcpd6.leases file when running the manual start command noted above:

        Can't open /var/lib/dhcp/dhcpd6.leases for append.

    Any ideas what I might be missing?
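    On Ubuntu, dhcpd drops privileges to the dhcpd user, which cannot append to a leases file owned by root. A sketch of a likely fix (the owning user/group is an assumption about the packaging; check with ls -l after the daemon has run once):

        # Make the leases file writable by the user dhcpd runs as:
        sudo touch /var/lib/dhcp/dhcpd6.leases
        sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases
        # Then retry:
        sudo dhcpd -6 -f -cf /etc/dhcp/dhcpd6.conf eth0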


  • Can't connect to or see my wifi SSID

    - by ant
    Today I installed Ubuntu 12.04 on my laptop. I am unable to see my home SSID or connect to it. I've tried to connect to it as a hidden SSID, but I always get prompted for authorization although my key is correct. I'm in Europe but my laptop is from the US; I'm not sure if that is relevant. I've read around this site and saw something about setting the channel above 11, but I'm not sure I did it correctly: How to use Wi-Fi channels above 11? Didn't help. I'm able to connect with cable, but not via wifi in either Windows or Linux. Other devices in my home can connect without any issues, even the Kindle. Here is the screenshot from my router: Here is some additional info:

        lspci | grep -i network
        08:00.0 Network controller: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) (rev 01)

        lspci -nnk | grep -A2 0280
        08:00.0 Network controller [0280]: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01)
                Subsystem: Hewlett-Packard Company U98Z062.10 802.11bgn Wireless Half-size Mini PCIe Card [103c:303f]
                Kernel driver in use: ath9k

        nm-tool
        NetworkManager Tool
        State: connected (global)

        Device: wlan0
        ----------------------------------------------------------------
        Type:              802.11 WiFi
        Driver:            ath9k
        State:             disconnected
        Default:           no
        HW Address:        90:4C:E5:38:79:0D

        Capabilities:
          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes

    I'm not sure what to do next. Any suggestions?
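    Given a US laptop in Europe, one sketch worth checking is the wireless regulatory domain: if the adapter is locked to the US domain, channels 12 and 13 are simply invisible to it. The country code below is an example; use your own:

        # Show the current regulatory domain and which channels it allows:
        iw reg get
        # Temporarily switch to a European domain, e.g. Germany:
        sudo iw reg set DE
        # To make it persistent on Ubuntu, set REGDOMAIN=DE in /etc/default/crda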


  • Issue while uploading an image to a SharePoint 2010 picture library

    - by Gino Abraham
    I was trying to upload an image to my picture library using the SharePoint client object model. I used the code from the blog below to upload a file to my picture library: http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx The image got uploaded successfully. But when we took the relative URL to use in a different list, we were getting an empty-image symbol. After a lot of analysis we figured out that the issue was with the file we uploaded: an image file that was really JPEG data had been uploaded with a gif extension. Try this: copy a JPG file from the net and save it to your file system. Change the extension of the file from jpg to gif. When you change the file extension, the image content remains the same and it will still open in picture viewers. Upload the file to your picture library. Once uploaded, you will see the file listed as a thumbnail in your picture library. Click on the thumbnail image and it will open a page showing a larger image with file details. Now click either on the image or on the file-name hyperlink: it opens an empty page with the default no-image symbol. I wasted a lot of time figuring this out, so I thought of sharing it here. Hope this helps someone.
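    The extension lies; the bytes don't. As a quick sketch of how to check what a file really contains, using the Unix file utility (the filename and output are examples):

        # Prints the actual format regardless of the extension:
        file logo.gif
        # => logo.gif: JPEG image data, JFIF standard 1.01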


  • Is it possible to transfer a domain without a "gap" in Whois privacy protection?

    - by Guest
    I currently own several domains on which I use a Whois privacy-protection service to hide my personal details. In the near future, I would like to transfer some of these domains to a different registrar. It has been many years since I last performed domain transfers, so I am no longer knowledgeable about what they involve. However, I have read from several registrars that they ask their customers to disable Whois protection before effecting a domain transfer. Since there are several websites out there that publish archived versions of Whois information (and charge handsome money for the information to be hidden, of course), I would prefer to avoid having such a "gap" in my privacy protection. I figured that these websites would fetch Whois information mainly when a query is made through their own website. However, I have found that at least one of these sites had a copy of the Whois information for a new domain up within hours after I registered it, so they must have some other source (of course I used a Google search to find that out, not their own site). What that tells me is that the time it takes for a domain transfer to go through would be more than enough for these rogue websites to cache my information. If my new registrar offers privacy protection for domains right from the point of registration as well, is there no way to transfer the domain between the two without reverting to my default Whois information in between?


  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    Possible Duplicate: What Forum Software should I use? I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists in a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top which can be clicked for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads outside of these default main tags if they wish). This approach, if done properly, is more powerful, efficient, maintenance-free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g. phpBB) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)


  • Linux default startup display (PCI) to fix black boot screen

    - by Jonathan
    You've heard it all before: black screen on boot after a perfectly fine install of most Linux distributions (Ubuntu, Mint, etc.); the netbook otherwise works fine. It has an Intel N10 integrated graphics chipset. I have found that if I plug in an external display and then remove it, the default screen turns on and my laptop works fine, drivers all great. I have tried the screen-cycle button (Fn+F7), but it doesn't work when no display is plugged in. The system also works out all the correct resolutions without any modification of the GRUB bootloader or creating any xorg configs. So I think my monitor is forcing a display that doesn't exist. Do you know if there is any way I can force it to choose a different screen at boot so I can get a login screen? I can use nomodeset on GRUB, but xrandr can't add the damned 1024x600@60 resolution that I need! Ideas, guys?
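    Regarding the xrandr part: a mode usually has to be created before it can be added. A sketch using cvt to generate the modeline; the output name LVDS1 is an assumption (check xrandr -q for the real one), and the modeline below is cvt's output for this resolution on one machine, so regenerate it yourself:

        # Generate a modeline for 1024x600 at 60 Hz:
        cvt 1024 600 60
        # Create and attach the mode (modeline copied from cvt's output):
        xrandr --newmode "1024x600_60.00" 49.00 1024 1072 1168 1312 600 603 613 624 -hsync +vsync
        xrandr --addmode LVDS1 1024x600_60.00
        xrandr --output LVDS1 --mode 1024x600_60.00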


  • Blender - creating bones from transform matrices

    - by user975135
    Notice: this is for the Blender 2.5/2.6 API. Back in the old days of the Blender 2.4 API, you could easily create a bone from a transform matrix in your 3D file, as EditBones had an attribute named "matrix": an armature-space matrix you could access and modify. The new 2.5+ API still has the "matrix" attribute for EditBones, but for some unknown reason it is now read-only. So how do you create EditBones from transform matrices? I could only find one thing: a new transform() function, which also takes a Matrix, and transforms the bone's head, tail, roll and envelope (when the matrix has a scale component). Perfect, but you already need to have some values (loc/rot/scale) for your bone; otherwise transforming with a matrix like this will give you nothing, and your zero-sized bone will be deleted by Blender. If you create default bone values first, like this:

        bone.tail = mathutils.Vector([0,1,0])

    then transform() will work on your bone, and it might seem to create correct bones. But setting a tail position actually generates a matrix itself, so after transform() you don't get the matrix from your model file on your EditBone, but the multiplication of your matrix with the bone's existing one. This can easily be proven by comparing the matrices read from the file with EditBone.matrix. Again, it might seem correct in Blender, but now export your model and you'll see your animations are messed up, as the bind-pose rotations of the bones are wrong. I've tried to find an alternative way to assign the transformation matrix from my file to my EditBone, with no luck.


  • Accessing an internal server (e.g. 192.168.10.10) without using remote desktop

    - by bergin
    Hi there. My boss has an intranet he wants his employees to gain access to from the WWW. There's a SharePoint server running on 192.168.10.10, and SBS can be seen from a website at 81.244.232.22 (some numbers like this). When you access it, there's a default internal SharePoint site, "companyweb", but we don't want to use that; we want the main SharePoint site, which has all the business on it. Is this possible? Currently we have to connect to a computer via remote desktop, choose the server, and then get in that way. Any ideas?



  • approx via inetd is not open to connections from other machines

    - by Cédric Girard
    I have an approx server to speed up Debian apt updates on my Ubuntu 11.04 desktop PC. It has run fine in the past, but today port 9999 is open from localhost and not from other PCs. I have not modified the inetd configuration at all. What can I check and try?

    inetd.conf:

        9999 stream tcp nowait approx /usr/sbin/approx /usr/sbin/approx

    approx.conf:

        # Here are some examples of remote repository mappings.
        # See http://www.debian.org/mirror/list for mirror sites.

        debian    http://ftp2.fr.debian.org/debian
        security  http://security.debian.org/debian-security
        volatile  http://volatile.debian.org/debian-volatile

        # The following are the default parameter values, so there is
        # no need to uncomment them unless you want a different value.
        # See approx.conf(5) for details.

        $cache /espace/Dossiers/approx
        $max_rate unlimited
        $max_redirects 5
        $user approx
        $group approx
        $syslog daemon
        $pdiffs true
        $offline false
        $max_wait 10
        $verbose false
        $debug false

    I tried to allow other PCs to connect with an "ALL: ALL" in hosts.allow. ufw is disabled, and iptables-save is empty.
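    A sketch of narrowing down whether this is a listening-address problem or a filtering problem (the IP address is an example):

        # Is inetd listening on all interfaces or only on loopback?
        sudo netstat -tlnp | grep :9999
        # 0.0.0.0:9999 means all interfaces; 127.0.0.1:9999 means loopback only.
        # From another PC on the LAN:
        telnet 192.168.1.10 9999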


  • Need help configuring my Tomcat server

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before:

        <!--APR library loader. Documentation at /docs/apr.html -->
        <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
        <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
        <Listener className="org.apache.catalina.core.JasperListener" />
        <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
        <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
        <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

        <!-- Global JNDI resources
             Documentation at /docs/jndi-resources-howto.html -->
        <GlobalNamingResources>
          <!-- Editable user database that can also be used by
               UserDatabaseRealm to authenticate users -->
          <Resource name="UserDatabase" auth="Container"
                    type="org.apache.catalina.UserDatabase"
                    description="User database that can be updated and saved"
                    factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                    pathname="conf/tomcat-users.xml" />
        </GlobalNamingResources>

        <!-- A "Service" is a collection of one or more "Connectors" that share
             a single "Container" Note: A "Service" is not itself a "Container",
             so you may not define subcomponents such as "Valves" at this level.
             Documentation at /docs/config/service.html -->
        <Service name="Catalina">

          <!--The connectors can use a shared executor, you can define one or more named thread pools-->
          <!--
          <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                    maxThreads="150" minSpareThreads="4"/>
          -->

          <!-- A "Connector" represents an endpoint by which requests are received
               and responses are returned. Documentation at :
               Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
               Java AJP Connector: /docs/config/ajp.html
               APR (HTTP/AJP) Connector: /docs/apr.html
               Define a non-SSL HTTP/1.1 Connector on port 8080 -->
          <Connector port="8080" protocol="HTTP/1.1"
                     connectionTimeout="20000"
                     redirectPort="8443" />
          <!-- A "Connector" using the shared thread pool-->
          <!--
          <Connector executor="tomcatThreadPool"
                     port="8080" protocol="HTTP/1.1"
                     connectionTimeout="20000"
                     redirectPort="8443" />
          -->
          <!-- Define a SSL HTTP/1.1 Connector on port 8443
               This connector uses the JSSE configuration, when using APR, the
               connector should be using the OpenSSL style configuration
               described in the APR documentation -->
          <!--
          <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                     maxThreads="150" scheme="https" secure="true"
                     clientAuth="false" sslProtocol="TLS" />
          -->

          <!-- Define an AJP 1.3 Connector on port 8009 -->
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

          <!-- An Engine represents the entry point (within Catalina) that processes
               every request. The Engine implementation for Tomcat stand alone
               analyzes the HTTP headers included with the request, and passes them
               on to the appropriate Host (virtual host).
               Documentation at /docs/config/engine.html -->

          <!-- You should set jvmRoute to support load-balancing via AJP ie :
          <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
          -->
          <Engine name="Catalina" defaultHost="localhost">

            <!--For clustering, please take a look at documentation at:
                /docs/cluster-howto.html (simple how to)
                /docs/config/cluster.html (reference documentation) -->
            <!--
            <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
            -->

            <!-- The request dumper valve dumps useful debugging information about
                 the request and response data received and sent by Tomcat.
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
            -->

            <!-- This Realm uses the UserDatabase configured in the global JNDI
                 resources under the key "UserDatabase". Any edits
                 that are performed against this UserDatabase are immediately
                 available for use by the Realm. -->
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                   resourceName="UserDatabase"/>

            <!-- Define the default virtual host
                 Note: XML Schema validation will not work with Xerces 2.2. -->
            <!--
            <Host name="localhost" appBase="webapps"
                  unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            -->

            <!-- SingleSignOn valve, share authentication between web applications
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
            -->

            <!-- Access log processes all example.
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
            -->
            <!-- </Host> -->

            <Host name="www.rebootradio.nu">
              <Alias>rebootradio.nu</Alias>
              <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/>
            </Host>
          </Engine>
        </Service>
        </Server>

    The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest version of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 page saying "The requested resource () is not available."
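    For comparison, here is a minimal virtual host of this shape written against a current Tomcat's expectations. This is a sketch, not a confirmed fix for the 404; the explicit appBase and the dropped debug attribute are assumptions about what a newer Tomcat wants:

        <Host name="www.rebootradio.nu" appBase="webapps"
              unpackWARs="true" autoDeploy="true">
          <Alias>rebootradio.nu</Alias>
          <Context path="" docBase="D:/services/http/rebootradio.nu" reloadable="true"/>
        </Host>

    Either way, Tomcat's logs (logs/catalina.*.log and logs/localhost.*.log) usually say why a Context failed to deploy, which is the quickest way to turn that 404 into an actual error message.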


  • How to fix poor rendering in Windows Movie Maker?

    - by Cyberherbalist
    I am using CamStudio to make some instructional videos about Visual Studio for our development team, and one of the videos needed some editing to remove audio and video mistakes and to add a better ending. CamStudio outputs .avi files, and they look pretty good, with the program source code being quite readable. However, after making the edits in WMM, it has rendered the results with a noticeable loss in quality. The recording has gone from fairly sharp and adequately readable to recognizable but somewhat blurred. WMM has also inflated the size of the new .avi file to three times the original (after cropping half of the video out!). And the .wmv I attempted to render was certainly smaller, but simply horrible-looking. The left image here is the original video, and the right image is WMM's rendering in .avi format: I have to be doing something wrong, but I know nothing about how to use WMM (this is my first use of it). I am using default settings to the best of my knowledge. Any suggestions welcomed!


  • Collapsing Bookmarks

    - by Tim Dexter
    I said I would tackle documenting some of the new features in the 10.1.3.4.1 roll-up patch I mentioned last week. With the patch you can now set the default state of bookmarks (if you create them) in your PDF outputs: your users may prefer to see them all collapsed to the base level, or perhaps collapsed to the second level to ease navigation; whatever they need. It's another opportunity for you to look like a star! You of course need to start with a table of contents, then add the convert|copy to bookmarks command. You can then add the new collapse command to set the appropriate level in the bookmarks:

        <?copy-to-bookmark:?>
        <?collapse-bookmark:show;2?>
        <<< Table of Contents >>>
        <?end convert-to-bookmark?>

    The command allows you to expand or collapse the bookmarks as you need. Of course, you will know how many levels you will have in the final output document. The command takes the form:

        <?collapse-bookmark:show|hide;level int?>

    Some examples:

        <?collapse-bookmark:hide;1?>
        <?collapse-bookmark:hide;2?>
        <?collapse-bookmark:hide;3?>

    Sample template and data here. Don't forget, you need that 10.1.3.4.1 roll-up!


  • What happens with the Guest OS's on ESXi in the event of a power failure?

    - by Jeremy Holovacs
    Many small businesses would rather let their server drop on power failure than pay even $100 for a cheap UPS. It's often difficult to convince them of the value of something like that; it's why they like ESXi. It's free, they can save a lot of cash by putting a bunch of Linux servers on one machine, and then I get paid. :) If the ESXi server experiences a power outage, it is set to come back on automatically when power is restored. What happens with the guest OS's? Ideally I would like them all to come online again as well, assuming they were on when power was lost, but I see no option for choosing this. I don't want to yank power to the system just to try it out, of course. I'm sure someone knows what happens by default, and perhaps how to make my system work as I would wish.
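    By default the guests stay powered off: ESXi only brings VMs back up if automatic startup has been configured for them (Host > Configuration > Virtual Machine Startup/Shutdown in the vSphere Client). For the command-line inclined, a rough sketch from the unsupported ESXi shell; the exact vim-cmd arguments here are from memory and should be treated as assumptions:

        # List the current autostart sequence and settings:
        vim-cmd hostsvc/autostartmanager/get_autostartseq
        # Enable autostart handling on the host (argument form is an assumption):
        vim-cmd hostsvc/autostartmanager/enable_autostart true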


  • Oracle Traffic Director – download and check out new cool features in 11.1.1.7.0 by Frances Zhao

    - by JuergenKress
    As Oracle's strategic layer-7 software load balancer product, Oracle Traffic Director is fast, reliable, secure, easy to use and scalable; you can deploy it as the reliable entry point for all TCP, HTTP and HTTPS traffic to application servers and web servers in your network. The latest release, Oracle Traffic Director 11.1.1.7.0, is available for ExaLogic and Database Appliance! For download and details please visit the Traffic Director OTN website. In this release, we have introduced some major new functionality and improvements:

    - Web application firewall. Oracle Traffic Director supports web application firewalls. A web application firewall (WAF) is a filter or server plugin that applies a set of rules, called rule sets, to an HTTP request. Using a web application firewall, users can inspect traffic and deny requests to protect back-end applications from CSRF vulnerabilities and common attacks such as cross-site scripting.
    - WebSocket connections. Oracle Traffic Director handles WebSocket connections by default. WebSocket connections are long-lived and allow support for live content, real-time games, video chat, and so on.
    - Support for LDAP/T3 load balancing. Oracle Traffic Director now supports basic LDAP/T3 load balancing at layer 7, where requests are handled as generic TCP connections for traffic tunneling. It works in full-NAT mode.

    Please download and try it out. For more information, check out the data sheet and the documentation. For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


  • Showing protocol specific mini icons for Jabber/XMPP gateway contacts

    - by aef
    Since a short while ago I've been using Ubuntu Oneiric Ocelot (11.10) with gnome-shell (GNOME 3), and I'm trying to get accustomed to the default Empathy instant messaging client. I'm using a gateway service on the side of my Jabber/XMPP server to communicate with lots of contacts over proprietary networks like ICQ or MSN, so I don't use Empathy's native support for ICQ and MSN, and I don't want to change back to using such a thing, for various reasons. One thing that annoys me is that Empathy does not make it clear that these contacts are from another instant messaging network. If I enable the View > Show Protocols option, they are all shown as Jabber/XMPP contacts. Although I perfectly understand why that happens, I would like to be able to change this behavior and make Empathy mark these contacts correctly. Is there a configuration option or a plugin for this? Or might this feature still be in development and become available later?


  • XNA CustomModelAnimationSample problem

    - by Mentoliptus
    I downloaded the official tutorial, CustomModelAnimationSample. It works fine, but when I try to replicate it in my project, it fails to load the Tag property of my model. I found that the problem is in the line:

        skinnedModel = Content.Load<Model>("DudeWalk");

    This line loads the model from the DudeWalk.fbx file with the custom SkinnedModelProcessor, which stores the animation data in the model. After the line, the Tag property is populated. I stepped into the method and it went to the custom ModelData class. I copied everything from the CustomModelAnimationWindows and CustomModelAnimationPipeline projects to my solution and set all the references. I tried the same line of code and couldn't step into the method. It called the default method or model constructor, and after the line the model's Tag property was null. I have to load the model through my custom SkinnedModelProcessor class, but how do I tell the game to use this class? In the tutorial CustomModelClass, the line is changed to:

        model = Content.Load<CustomModel>("tank");

    So I assumed that I have to set the generic type to a custom model class, but the first example works without it. If anyone has some useful advice or some other helpful link, I'll be happy to try it.


  • Keeping Xv Overlay configuration throughout an X session.

    - by kriss
    After upgrading my Linux system from Ubuntu 9.04 to Ubuntu 10.10, I succeeded in correcting most problems (all related to support for the Intel 82865G integrated graphics adapter; compiz is still not working, but that's another matter), but for one of them I only have a partial solution. Whenever I play a video, the colors are much too saturated. This is a real problem for skin tones, which appear reddish (everyone seems to be coming back from a ski vacation with deep sunburn). As this effect only occurs with videos, not with pictures, I finally figured out it was related to the video overlay configuration, and I can correct it by typing:

        xvattr -a XV_SATURATION -v 120

    This changes the default saturation value, which is 500 and much too high in my case; by eye, the correct value seems to be between 100 and 150. Now my problem is that I have to type the above command each time I play a video. If I type it before running the video it has no effect; if I close the video and open a new one, I have to type it again, and so on. I tried putting it in Xsession and (logically) it has no effect either. What could I do to get the correct setting whenever I play a video, without typing the above command every time?
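    One possible angle, as a sketch: wrap the player in a script that applies the attribute shortly after the player has grabbed its Xv port. The player name, delay and value below are assumptions to adapt:

        #!/bin/sh
        # Hypothetical wrapper: start the player, wait for it to open its Xv
        # overlay, then lower the saturation on the adaptor.
        mplayer "$@" &
        sleep 2
        xvattr -a XV_SATURATION -v 120
        wait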

