Daily Archives

Articles indexed Monday December 10 2012


  • Printing contents of a dynamically created iframe from parent window

    - by certainlyakey
    I have a page with a list of links and a div which serves as a placeholder for the pages the links lead to. Every time the user clicks on a link, an iframe is created in the div and its src attribute gets the value of the link's href attribute - simple and clear, a sort of external web-page gallery. What I do have a problem with is printing the contents of the iframe. To do this I use this code: function PrintIframe() { frames["name"].focus(); frames["name"].print(); } The issue seems to be that the iframe is created dynamically by jQuery - when I insert an iframe right into the HTML code, the browser prints the external page all right, but with the same code injected by JavaScript nothing happens. I tried to use the jQuery 1.3 'live' event on the "Print" link, with no success. Is there a way to overcome this?
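
    A minimal sketch of one way to approach this (the #placeholder selector and the frame name extpage are assumptions, not names from the post): attach the print call to the frame's load event before setting src, so the external page has finished loading and the named frame is registered by the time print() runs.

        function openAndPrint(url) {
            // Build the frame and attach the load handler *before* giving it a src.
            var frame = $('<iframe name="extpage"></iframe>')
                .bind('load', function () {
                    var win = this.contentWindow;  // same window as frames['extpage'] once attached
                    win.focus();                   // some browsers require focus before print()
                    win.print();
                })
                .attr('src', url);
            // Replace whatever was in the placeholder div with the new frame.
            $('#placeholder').empty().append(frame);
        }

    Calling print() immediately after injecting the frame tends to fail because the frame's document hasn't loaded yet (and the named frame may not be in window.frames at all); waiting for the load event avoids both problems.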

    Read the article

  • AdvancedFormatProvider: Making string.format do more

    - by plblum
    When I have an integer that I want to format with the String.Format() and ToString(format) methods, I'm always forgetting the format symbol to use with it. That's probably because it's not very intuitive. Use {0:N0} if you want it with group (thousands) separators: text = String.Format("{0:N0}", 1000); // returns "1,000" int value1 = 1000; text = value1.ToString("N0"); Use {0:D} or {0:G} if you want it without group separators: text = String.Format("{0:D}", 1000); // returns "1000" int value2 = 1000; text2 = value2.ToString("D"); The {0:D} is especially confusing because Microsoft gives the token the name "Decimal". I thought it reasonable to have a new format symbol for String.Format, "I" for integer, and the ability to tell it whether it shows the group separators. Along the same lines, why not expand the format symbols for currency ({0:C}) and percent ({0:P}) to let you omit the currency or percent symbol, omit the group separator, and even drop the decimal part when the value is equal to the whole number? My solution is an open source project called AdvancedFormatProvider, a group of classes that provide the new format symbols, continue to support the rest of the native symbols and make it easy to plug in additional format symbols. Please visit https://github.com/plblum/AdvancedFormatProvider to learn about it in detail and explore how it's implemented. The rest of this post explores some of the concepts it takes to expand String.Format() and ToString(format).

    AdvancedFormatProvider benefits: Supports an {0:I} token for integers, with an {0:I-,} option to omit the group separator. Supports the {0:C} token with several options: {0:C-$} omits the currency symbol, {0:C-,} omits group separators, and {0:C-0} hides the decimal part when the value would show ".00". For example, 1000.0 becomes "$1000" while 1000.12 becomes "$1000.12". Supports the {0:P} token with several options: {0:P-%} omits the percent symbol, {0:P-,} omits group separators, and {0:P-0} hides the decimal part when the value would show ".00". For example, 1 becomes "100 %" while 1.1223 becomes "112.23 %". Provides a plug-in framework that lets you create new formatters to handle specific format symbols. You register them globally so you can just pass the AdvancedFormatProvider object into String.Format and ToString(format) without having to figure out which plug-ins to add. text = String.Format(AdvancedFormatProvider.Current, "{0:I}", 1000); // returns "1,000" text2 = String.Format(AdvancedFormatProvider.Current, "{0:I-,}", 1000); // returns "1000" text3 = String.Format(AdvancedFormatProvider.Current, "{0:C-$-,}", 1000.0); // returns "1000.00"

    The IFormatProvider parameter: Microsoft has made String.Format() and ToString(format) extensible. They each take an additional parameter, an object that implements System.IFormatProvider. This interface has a single member, the GetFormat() method, which returns an object that knows how to convert the format symbol and value into the desired string. There are already a number of web-based resources to teach you about IFormatProvider and the companion interface ICustomFormatter. I'll defer to them if you want to dig more into the topic. The only thing I want to point out is what I think are implementation considerations.

    Why GetFormat() always tests for ICustomFormatter: When you see examples of implementing IFormatProvider, the GetFormat() method always tests the parameter against the ICustomFormatter type. Why is that? public object GetFormat(Type formatType) { if (formatType == typeof(ICustomFormatter)) return this; else return null; } The value of formatType is already predetermined by the .NET framework. String.Format() uses the StringBuilder.AppendFormat() method to parse the string, extracting the tokens and calling GetFormat() with the ICustomFormatter type. (The .NET framework also calls GetFormat() with the types System.Globalization.NumberFormatInfo and System.Globalization.DateTimeFormatInfo, but these are exclusive to how the System.Globalization.CultureInfo class handles its implementation of IFormatProvider.)

    Your code replaces instead of expands: I would have expected the caller to pass the format string to GetFormat() to allow your code to determine if it handles the request. My vision would be to return null when the format string is not supported; the caller would iterate through IFormatProviders until it finds one that handles the format string. Unfortunately that is not the case. The reason you write GetFormat() as above is that the caller expects an object that handles all formatting cases. You are effectively supposed to write enough code in your formatter to handle your new cases and call .NET functions (like String.Format() and ToString(format)) to handle the original cases. It's not hard to support the native functions from within your ICustomFormatter.Format function. Just test the format string to see if it applies to you. If not, call String.Format() with a token using the format passed in. public string Format(string format, object arg, IFormatProvider formatProvider) { if (format.StartsWith("I")) { // handle "I" formatter } else return String.Format(formatProvider, "{0:" + format + "}", arg); }

    Formatters are only used by explicit request: Each time you write a custom formatter (an implementer of ICustomFormatter), it is not used unless you explicitly pass an IFormatProvider object that supports your formatter into String.Format() or ToString(). This has several disadvantages: Suppose you have several ICustomFormatters. In order to have all of them available to String.Format() and ToString(format), you have to merge their code and create an IFormatProvider that returns an instance of your new class. You have to remember to use the IFormatProvider parameter. It's easy to overlook, especially when you have existing code that calls String.Format() without it. Some APIs may call String.Format() themselves. If those APIs do not offer an IFormatProvider parameter, your ICustomFormatter will not be available to them. AdvancedFormatProvider solves the first two of these problems by providing a plug-in architecture.
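
    To make the pattern above concrete, here is a minimal self-contained sketch (an illustration of the described pattern, not the AdvancedFormatProvider source) of one class implementing both IFormatProvider and ICustomFormatter: it handles an "I" token and defers everything else to the value's own formatting. The fallback goes through IFormattable directly rather than re-calling String.Format with the same provider, which avoids re-entering the custom formatter.

        using System;
        using System.Globalization;

        public class IntegerFormatter : IFormatProvider, ICustomFormatter
        {
            public object GetFormat(Type formatType)
            {
                // The framework only asks us for an ICustomFormatter.
                return formatType == typeof(ICustomFormatter) ? this : null;
            }

            public string Format(string format, object arg, IFormatProvider formatProvider)
            {
                if (format != null && format.StartsWith("I"))
                {
                    // "I" -> group separators; "I-," -> no group separators.
                    string numericFormat = format.Contains("-,") ? "D" : "N0";
                    return Convert.ToInt64(arg).ToString(numericFormat, CultureInfo.CurrentCulture);
                }
                // Not our token: defer to the value's built-in formatting.
                IFormattable formattable = arg as IFormattable;
                if (formattable != null)
                    return formattable.ToString(format, CultureInfo.CurrentCulture);
                return arg != null ? arg.ToString() : String.Empty;
            }
        }

        // Usage: the provider must be passed in explicitly, as described above.
        // String.Format(new IntegerFormatter(), "{0:I}", 1000)   // "1,000"
        // String.Format(new IntegerFormatter(), "{0:I-,}", 1000) // "1000"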

    Read the article

  • Best DSL hardware for ADSL Troubleshooting

    - by Jeff Sacksteder
    I have a situation where I need to make the best of a bad DSL connection. The CPE is a black box with no access to DSL diagnostics. My plan is to get some sort of DSL hardware that exposes link-layer state and gives me knobs to tweak. I'd like to be able to mitigate bufferbloat as much as I can while I'm at it. The obvious choice would seem to be a Sangoma card in a Linux system. I have no way of knowing if that will do anything for me without testing it, however. I have no other access to WAN troubleshooting equipment. Are there any other options available to me as a consumer?

    Read the article

  • Trailing dots in url result in empty 404 page on IIS

    - by Peter Hahndorf
    I have an ASP.NET site on IIS8, but IIS7.5 behaves exactly the same. When I enter a URL like mysite.com/foo/bar.. I get an error with a '500 Internal Server Error' status code, even though I have custom error pages set up for 500 and 404 and I don't see anything wrong with my custom error page. In my web.config system.web node I have the following: <customErrors mode="On"> <error statusCode="404" redirect="/404.aspx" /> </customErrors> If I remove that section, I get a 404.0 response back but the page itself is blank. In web.config system.webServer I have: <httpErrors errorMode="DetailedLocalOnly"> <remove statusCode="404" subStatusCode="-1" /> <error statusCode="404" prefixLanguageFilePath="" path="404.html" responseMode="File" /> </httpErrors> But whether that is there or not, I get the same blank 404.0 page rather than my expected custom error page, or at least an internal IIS message. So first of all, why is the ASP.NET handler picking up a request for '..' (this also happens with one or more trailing dots)? If I remove the following handler from applicationHost.config: <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" responseBufferLimit="0" /> I get my expected custom 404 page, but of course removing that handler breaks routing in ASP.NET among other things. Looking at the failure trace, I see the Windows Authentication module involved - but Windows Authentication is disabled for the site, so why is that module even in the request pipeline? For now my fix is to use the URL Rewrite module with the following rule: <rewrite> <rules> <rule name="Trailing Dots" stopProcessing="true"> <match url="\.+$" /> <action type="Rewrite" url="/404.html" appendQueryString="false" /> </rule> </rules> </rewrite> This works okay, but I wonder why IIS/ASP.NET behaves this way?

    Read the article

  • RAID 10: SPAN 2 vs SPAN 4

    - by LaDante Riley
    I am currently configuring RAID 10 (first time doing RAID ever) for a server at work. In the configuration utility I am given the option of either span 2 or span 4. Having never done this before, I was curious if someone could tell me the pros and cons of each span? Thanks. The server is a PowerEdge R620 with a PERC H710 Mini (Security Capable) RAID controller. I have 8 600GB hard drives. I am setting this server up as a network storage drive. I have a SQL Server historian database whose 1TB of storage filled up after 5 years of logging data.

    Read the article

  • install headers-more-nginx-module to existing nginx

    - by Hunt
    I have installed nginx, but now I want to install headers-more-nginx-module into the existing installation of nginx. Can someone tell me how to do it? I have found the following commands, but they are for a new installation of nginx together with headers-more-nginx-module: wget 'http://nginx.org/download/nginx-1.2.4.tar.gz' tar -xzvf nginx-1.2.4.tar.gz cd nginx-1.2.4/ # Here we assume you would install your nginx under /opt/nginx/. ./configure --prefix=/opt/nginx \ --add-module=/path/to/headers-more-nginx-module make make install Currently I guess my nginx is installed wherever the nginx.conf is placed: /etc/nginx
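
    A sketch of the usual approach (assuming the existing nginx was built from source; modules in nginx 1.2.x are compiled in statically, so the binary has to be rebuilt with the module added to the configure arguments the current binary already uses):

        # Show how the running binary was configured (the "configure arguments: ..." line).
        nginx -V

        # Rebuild matching sources with the same arguments plus the new module.
        wget 'http://nginx.org/download/nginx-1.2.4.tar.gz'
        tar -xzvf nginx-1.2.4.tar.gz
        cd nginx-1.2.4/
        ./configure <paste the configure arguments reported by nginx -V> \
                    --add-module=/path/to/headers-more-nginx-module
        make

        # Stop nginx, back up and replace the old binary with objs/nginx
        # (or run "make install" if the --prefix matches the existing install),
        # then start nginx again and check that "nginx -V" lists the new module.

    If nginx came from a distribution package rather than from source, the same idea applies: the package (or binary) has to be rebuilt with the extra --add-module.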

    Read the article

  • Windows 2003 to log on no matter how it was turned off

    - by Arya
    I have a Windows Server 2003 machine which I use TeamViewer to connect to. When there is a power failure and the computer restarts, it will not load TeamViewer. It gets stuck on a screen which asks why the computer was turned off. There is no keyboard or mouse connected to the server and I need TeamViewer to open every time; I don't want the screen asking why it was turned off to ever show up. I turned off the Shutdown Event Tracker by doing the following: open gpedit.msc, go to Computer Configuration > Administrative Templates > System, click Display Shutdown Event Tracker and select the Disabled radio button. Is this enough to prevent that screen from showing up again, or is there anything else I need to change?

    Read the article

  • Ubuntu 12.04 host lookups extremely slow

    - by tubaguy50035
    I'm having issues with one of my servers taking a long time to look up hostnames. This is an Ubuntu 12.04 box, so I've tried following the new resolvconf directives. In my /etc/network/interfaces file, I defined my name servers like this: auto eth0 iface eth0 inet static address someaddress netmask 255.255.255.0 gateway 198.58.103.1 dns-nameservers 74.14.179.5 72.14.188.5 In my /etc/resolv.conf, I see these name servers, like this: # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 74.14.179.5 nameserver 72.14.188.5 On another box, I edited the resolv.conf directly as directed by my host's setup help files. It looks like this: domain members.linode.com search members.linode.com nameserver 72.14.179.5 nameserver 72.14.188.5 options rotate This second box has no issues with hostname lookups and responds quite quickly. Could not having the domain and search directives make my lookups slow? By slow, I mean it's taking anywhere from 5 to 15 seconds to find the IP address of a host.
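
    One quick way to narrow this down (a sketch; example.com stands in for any hostname actually being resolved) is to time each configured resolver directly and compare that with the full libc lookup path that applications use:

        # Query each nameserver from /etc/resolv.conf directly.
        time dig @74.14.179.5 example.com +short
        time dig @72.14.188.5 example.com +short

        # Time the normal lookup path (nsswitch: files, then DNS) that applications see.
        time getent hosts example.com

    If dig answers quickly but getent is slow, the delay is in the local resolver configuration (search domains, IPv6 AAAA timeouts, nsswitch) rather than in the nameservers; if a single dig is slow, that nameserver is the problem.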

    Read the article

  • Routing with VPN and asymmetric communication

    - by Louis
    I'm stumbling on a problem that requires your advice. Keywords: networking, route, OpenVPN. Problem: I have a local network with several physical servers and VMs. These machines have IPs in the range 10.10.x.x. I can access these machines from the Internet with the help of OpenVPN. These machines can: access each other within the local 10.10.x.x subnet, access the Internet via the VPN, and themselves be accessed (via SSH) from the Internet via the VPN. There is one machine, however, that behaves strangely and I don't know why. I can SSH into this machine from anywhere and I can also ping it from anywhere (including the Internet). However, from this machine (i.e. when logged into it) I cannot access the Internet or ping machines outside the local network. In other words, it will not go beyond the VPN. My question is why? Here are some technical details: The machine's network config (running Debian 6.0.3): allow-hotplug eth0 iface eth0 inet static address 10.10.10.200 netmask 255.255.0.0 network 10.10.10.0 broadcast 10.10.10.255 gateway 10.10.10.200 The machine's routing: Destination Gateway Genmask Flags MSS Window irtt Iface 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 10.10.0.0 10.10.10.250 255.255.0.0 UG 0 0 0 eth0 10.10.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 10.10.10.250 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.10.10.200 0.0.0.0 UG 0 0 0 eth0 The VPN's network config (running Debian 6.0.3): # This is the local network interface auto eth1 allow-hotplug eth1 iface eth1 inet static address 10.10.10.250 netmask 255.255.0.0 broadcast 10.10.10.255 gateway 10.10.10.250 The VPN's routing table: Destination Gateway Genmask Flags MSS Window irtt Iface 10.10.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tun0 private 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.10.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 0.0.0.0 10.10.10.250 0.0.0.0 UG 0 0 0 eth1 0.0.0.0 private 0.0.0.0 UG 0 0 0 eth0 net.ipv4.ip_forward = 1 on both machines. There are no iptables rules set anywhere. Thanks in advance for any feedback.

    Read the article

  • SVN Active Directory authentication with ProxyPass redirect in the mix

    - by Jason B. Standing
    We have a BitNami SVN stack running on a Windows machine which holds our SVN repository. It's set up to authenticate against our AD server and uses authz to control rights. Everything works perfectly if Tortoise points at http://[machine name]/svn. However, we need to be able to access it from http://[domain]/svn. The domain name points to a Linux environment that we're decommissioning, but until we do, other systems on that box prevent us from just re-pointing the domain record. Currently, we've got a ProxyPass rule on the Linux machine to forward requests through to http://[machine name]/svn - it seems to work fine, and the endpoint machine asks for credentials, then authenticates: but when that happens, the access attempt is logged as coming from the Linux box, rather than from the user who has authenticated. It's almost as if some element of the credentials isn't being passed through to the endpoint machine. Has anyone done this before, or is there other info I can give to try to make sense of this problem and figure out a way to solve it? Thank you!
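
    For reference, a sketch of the reverse-proxy piece described above (hostnames are placeholders; the directives are standard Apache mod_proxy):

        # On the Linux box that still owns the domain name:
        ProxyPass        /svn  http://svn-windows-box/svn
        ProxyPassReverse /svn  http://svn-windows-box/svn
        ProxyPreserveHost On

    With this arrangement the TCP connection to the BitNami machine is opened by the proxy itself, so the endpoint logs the proxy's address as the client; the original client IP only travels in the X-Forwarded-For header that mod_proxy adds, and the authenticated user is whatever the endpoint's own AD authentication establishes, so both have to be logged explicitly on the endpoint if you want to see them.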

    Read the article

  • Outlook displaying sender incorrectly

    - by Devnull
    In one user's mailbox, anything sent from any distribution list in our domain shows up as being from 'System Administrator'. It only happens when they are viewing the inbox using Outlook (OWA is not affected), and it persists across computers (though it did not happen immediately). When other users view the inbox from their Outlook install (i.e. open the user's folder), everything appears as normal. Other folders are not affected: if a message is moved into a subfolder, the sender displays properly. Because of the persistence, and because it only affects one user, I suspect some user behavior is causing this, but I cannot determine what. I've checked the contact list, and it's not that.

    Read the article

  • VMWare/Ubuntu development stack with symlink to Windows 7

    - by wdhilliard
    I would like to use a VM Ubuntu installation as my testing environment, but to ease my workflow, I have symlinked /var/www to a Windows share. Everything looks good when browsing files, and the owner and group both show up as www-data, but I cannot seem to get Apache to respond with anything other than permission denied. Obviously there are still some permission issues between Windows 7 and Ubuntu, but I don't know where to go from here.

    Read the article

  • Move files contained in a certain dir to the previous one (centOS)

    - by Alex
    I will try to explain my problem (sorry for my bad English). I have an image gallery with a directory structure like this: images/dir1/subdir1/IMG/files.jpg images/dir1/subdir2/IMG/files.jpg images/dir1/subdir3/IMG/files.jpg images/dir2/subdir1/IMG/files.jpg ....... images/dir109/subdir1/IMG/files.jpg The directory named images contains 109 dirs (dir1, dir2, ... dir109); the 109 dirs have 1200 subdirs in total, and every subdir contains a dir named IMG with images in it (file1.jpg, file2.jpg, etc.). I would like to move all the images contained in every dir named IMG into the parent dir (the subdir), to end up with something like this: images/dir1/subdir1/file1.jpg images/dir1/subdir1/file2.jpg images/dir1/subdir2/file1.jpg ........ images/dir109/subdir1/file.jpg
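
    A sketch of one way to do this from the images directory with find (worth testing on a copy first; it assumes the IMG directories contain only regular files, as described):

        cd images
        # For every .../subdirN/IMG directory: move its files up one level,
        # then remove the now-empty IMG directory.
        find . -type d -name IMG | while read -r imgdir; do
            mv "$imgdir"/* "$(dirname "$imgdir")"/ && rmdir "$imgdir"
        done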

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not strictly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any The puppet master, however, does not seem to be following this, as I get this error on the client: [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" The fileserver conf file is as follows (and going by what they say on the puppet site, it is better to regulate access in auth.conf for reaching the file server and then let the file server serve all): [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3. Nginx has been compiled using the following options: nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf: server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet: [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak anything specific in auth.conf to get this to work? 
Update: I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in the logs: Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine, facter fqdn and hostname both return a fully qualified host name: [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on the master and agent, regenerated the certificates and signed them on the master using the option --allow-dns-alt-names: [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' However, that doesn't help either; I get the same errors as before. I'm not sure why the logs show access rules being compared against the IP rather than the hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • Is it possible to run multiple mongod instances on a single set of database files

    - by 9point6
    We have large multi-gigabyte data sets on which we run very complex queries, for example { $or: [ { id: 30000001, ... }, { id: 30000005, ... }, ..., { id: 30001005, ... } ] } It seems that CPU is actually a bottleneck at this point, so it'd be advantageous to be able to run multiple mongod instances on the same set of database files. We've considered using replica sets to this end, but would prefer not to require the extra disk space simply for CPU reasons.

    Read the article

  • IIS 7 and ASP.NET State Service Configuration

    - by Shawn
    We have 2 web servers load balanced and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions but we don't have the resources for that. I've followed these instructions to no avail. The session still isn't being shared between the two servers. I'm not receiving any errors. I have the same machine key for both servers, and I've set the application ID to a unique value that matches between the two servers. Any suggestions on how I can troubleshoot this issue? Update: I turned on the session state service on my local machine and pointed both servers to the ip address on my local machine and it worked as expected. The session was shared between both servers. This leads me to believe that the problem might be that I'm not using a standalone server as my state service. Perhaps the problem is because I am using the ip address 127.0.0.1 on one server and then using a different ip address on the other server. Unfortunately when I try to use the network ip address as opposed to localhost the connection doesn't seem to work from the host server. Any insight on whether my suspicions are correct would be appreciated.
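
    For reference, a sketch of the pieces involved (the IP and port are placeholders; 42424 is the state service's default port). One detail that matches the symptom above: by default the ASP.NET State Service only accepts connections from the local machine, and remote web servers are rejected until its AllowRemoteConnection registry value is enabled.

        <!-- web.config on both load-balanced servers: same machineKey, same state server -->
        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=10.0.0.5:42424"
                        timeout="20" />
        </system.web>

        <!-- On the machine running the state service, enable remote connections:
             HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters
             AllowRemoteConnection (DWORD) = 1
             then restart the "ASP.NET State Service". -->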

    Read the article

  • Problems mounting HPUX LVM+VXFS filesystem on Linux

    - by golimar
    I have a physical disk from a HPUX system that I need to access from a Debian Linux for ia64 system. From the hpux-lvm-tools project I have the tools to access the HPUX LVMs (Linux LVM has a different format) and I also have the freevxfs driver. I know beforehand that the disk has three partitions, and that the biggest one contains LVM volumes, and some of those are VxFS filesystems. I can see the partitions: # cat /proc/partitions major minor #blocks name 8 32 143374744 sdc 8 33 512000 sdc1 8 34 142452736 sdc2 8 35 409600 sdc3 It finds a VG in one of the disk partitions: # ./vgscan_hpux On /dev/sdc2 - vg1328874723 # ./pvdisplay_hpux /dev/sdc2 PV General Information ---------------------- VG Creation Time Fri Feb 10 12:52:03 2012 Physical Volume ID 1766760336 1328874723 Volume Group ID 1766760336 1328874723 Physical Volumes in VG 1766760336 1328874723 VG Actication Mode 0 - LOCAL PE Size 64 MBs Lvol sizes ---------- lvol1 - 8 Extents - 512 MBs lvol2 - 192 Extents - 12288 MBs lvol3 - 16 Extents - 1024 MBs ... lvol21 - 13 Extents - 832 MBs lvol22 - 224 Extents - 14336 MBs lvol23 - 16 Extents - 1024 MBs Then I activate that VG and some new devices appear in my system: # ./pvactivate_hpux /dev/sdc2 VG vg1328874723 Activated succesfully with 23 lvols. # # ll /dev/mapper/ total 0 crw------- 1 root root 10, 59 Nov 26 16:08 control lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol1 -> ../dm-0 lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol10 -> ../dm-9 ... lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol8 -> ../dm-7 lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol9 -> ../dm-8 But: # mount /dev/mapper/vg1328874723-lvol18 /mnt/tmp mount: you must specify the filesystem type # mount -t vxfs /dev/mapper/vg1328874723-lvol18 /mnt/tmp mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1328874723-lvol18, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so # lsmod |grep vxfs freevxfs 23905 0 I also tried to identify the raw data with the file command and it just says 'data': # file -s /dev/mapper/vg1328874723-lvol18 /dev/mapper/vg1328874723-lvol18: symbolic link to `../dm-17' # file -s /dev/dm-17 /dev/dm-17: data # Any clues?

    Read the article

  • Running SQL*Plus with bash causes wrong encoding

    - by Petr Mensik
    I have a problem with running SQL*Plus in bash. Here is my code: #!/bin/bash #curl http://192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql > script.sql wget -O script.sql 192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql set NLS_LANG=_.UTF8 sqlplus /nolog << ENDL connect login/password set sqlblanklines on start script.sql exit <<endl I download the insert statements from our intranet, put them into an SQL file and run it through SQL*Plus. This is working fine. My problem is that when I save the file script.sql the encoding goes wrong: all special characters (like íášc) are broken, and that causes wrong characters to be inserted into my DB. The encoding of that file is UTF-8, and UTF-8 is also set on the XSQL page on our intranet. So I really don't know where the problem could be. Any advice regarding my script is also welcome; I am a total newbie in Linux scripting :-)
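
    For what it's worth, a sketch of how the client character set is normally set in a bash script before calling SQL*Plus: "set NAME=value" is Windows syntax and does not export anything in bash, so SQL*Plus never sees NLS_LANG; it has to be exported, and its value should match the database character set (AL32UTF8 below is an assumption):

        #!/bin/bash
        # Export NLS_LANG so the sqlplus child process inherits it.
        export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

        wget -O script.sql 192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql

        sqlplus /nolog <<ENDL
        connect login/password
        set sqlblanklines on
        start script.sql
        exit
        ENDL

    It is also worth checking what encoding the downloaded file actually has (for example with file script.sql) before blaming the SQL*Plus side.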

    Read the article

  • Zero-channel RAID for High Performance MySQL Server (IBM ServeRAID 8k) : Any Experience/Recommendation?

    - by prs563
    We are getting this IBM rack mount server and it has an IBM ServeRAID 8k storage controller with Zero-Channel RAID and 256MB battery-backed cache. It supports RAID 10, which we need for our high-performance MySQL server, which will have 4 x 15,000 RPM 300GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card? The IBM ServeRAID 8k SAS Controller option provides 256 MB of battery-backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar, which can provide full RAID capability. Manufacturer IBM Manufacturer Part # 25R8064 Cost Central Item # 10025907 Product Description IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E Device Type Storage controller (zero-channel RAID) - plug-in module Buffer Size 256 MB Supported Devices Disk array (RAID) Max Storage Devices Qty 8 RAID Level RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E Manufacturer Warranty 1 year warranty

    Read the article

  • taskkill - end tasks with window titles ending with a specific string

    - by DBZ_A
    I need to write a batch program to end all MS Office Communicator tasks with window titles usually ending with the pattern "- Conversation". I tried taskkill /FI "WINDOWTITLE eq *Conversation" /IM communicator.exe but a wildcard pattern starting with a '*' does not seem to work; it gives the following error: ERROR: The search filter cannot be recognized. Any suggestions for a workaround would be greatly appreciated!
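
    If a batch-only solution isn't mandatory, a PowerShell sketch works around the leading-wildcard limitation by filtering on the window title itself:

        # Stop every communicator.exe process whose main window title ends in "Conversation".
        Get-Process communicator -ErrorAction SilentlyContinue |
            Where-Object { $_.MainWindowTitle -like '*Conversation' } |
            Stop-Process

    This can still be launched from a batch file via powershell.exe -Command "...".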

    Read the article

  • iTerm2 vim cannot map alt key

    - by Eddy
    I'm having trouble trying to map the alt-key bindings in vim under iTerm2. I want to map shortcuts for switching between buffers like this: map <A-Right> <C-w>l map <A-Left> <C-w>h map <A-Down> <C-w>j map <A-Up> <C-w>k But I can't get it to work. I've tried everything: setting the option key to "Normal", "Meta" and "+Esc" in the profile settings. I've tried <M-Right> and <T-Right> but those don't work either. There are posts on Super User and Stack Overflow, but they use an old version of iTerm2 (v0.x). The only things I've managed to get working are <T-up> and <T-down>, or just using MacVim instead. I'm using iTerm2 v1.0.0.20120203 and Mac OS X 10.7.5 on a MacBook Pro.

    Read the article

  • 2D animations for Android, IOS, web

    - by David Phillips
    Working on the Windows 7 platform, I'm interested in creating an app that could run on Android devices (phone or tablet), iOS devices (iPhone, iPad), and the web. The first target would be Android. Do you remember those old Arthur Murray footstep diagrams for teaching dancing? I want to do the same thing, but animate the steps, so the user can play them all, as well as step forward or backward through them at their own pace. Perhaps one could generate an animated GIF and then play it on the platform. So there would seem to be two parts to this: 1) A good way to generate the animation images. I can, for example, see using SketchUp - something I know - for this. But is there something better that would be worth investing in? And 2) How to play the result, including options for playback speed control and even forward/backward stepping? Ideally a single player, or one easily adaptable to the three different OS platforms.

    Read the article

  • Using a virtual machine as a MySQL server

    - by ffmm
    I'm developing a Java program and I need a database. Right now I'm using MAMP and it's pretty easy, but I would like to have a virtual machine (Ubuntu Server) and I need to connect my Java program to this virtual machine using VirtualBox. The situation: I installed VirtualBox on my Mac and I installed an ubuntu-server machine, set "bridged adapter" in the network settings of VB, installed MySQL on ubuntu-server and created a simple database (everything works fine on Ubuntu). Running ifconfig on Ubuntu I get the IP 192.168.1.217, so in the Java program I made this function: public static Connection connect(String host, int port, String dbName, String user, String passwd) { Connection dbConnection = null; try { String dbString = null; Class.forName("com.mysql.jdbc.Driver").newInstance(); dbString = "jdbc:mysql://" + host + ":" + port + "/" + dbName; dbConnection = DriverManager.getConnection(dbString, user, passwd); } catch (Exception e) { System.err.println("Failed to connect with the DB"); e.printStackTrace(); } return dbConnection; } and in main() I use: Connection con = connect("192.168.1.217", 3306, "Ciao", "root", "cocacola"); 3306 was a default value. I don't know if it is correct - it works on MAMP - but how can I find the correct port that I have to use with VB? When I run the program I get the catch exception… what's wrong? PS: do I have to install Apache or something else?
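
    For reference, a sketch of the guest-side settings that usually matter with a bridged adapter (3306 is indeed MySQL's default port, so the Java code above can keep it; the GRANT below reuses the names from the post purely as an example, not a recommendation to expose root):

        # On the Ubuntu Server guest -- MySQL listens only on localhost by default.
        # In /etc/mysql/my.cnf, change or comment out:
        #     bind-address = 127.0.0.1
        # then restart MySQL:
        sudo service mysql restart

        # The account used by the Java program must also be allowed to connect
        # from the host's address on the bridged network:
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON Ciao.* TO 'root'@'192.168.1.%' IDENTIFIED BY 'cocacola';"

    With the adapter in bridged mode the guest's 192.168.1.217 address is reachable directly from the Mac, so no VirtualBox port forwarding is needed; the JDBC URL jdbc:mysql://192.168.1.217:3306/Ciao should work once MySQL accepts remote connections.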

    Read the article

  • Strange sound from Laptop, Microphone, Speakers or Soundcard?

    - by David Schilling
    I have recently purchased a second-hand computer. There didn't seem to be anything wrong with it at purchase, however recently I have noticed a small issue. Any time I play sound or listen through my headphones, I can hear what seems to be the CPU working (i.e. when beginning a new task, there is a small crackling). Also, if I tap my laptop I can hear the sound louder, as if it is being picked up by the mic and then played back in real time through the speakers or headphones. I have tried reading other topics but cannot find an answer to this question. These are the specs I am running on: Windows 7 Home Premium 32bit, Pentium Dual Core CPU T4300 @ 2.10GHz. I hope someone can help me with this issue, it would be much appreciated.

    Read the article

  • Cannot connect to IIS 7 from localhost

    - by Wout
    I cannot connect to the local IIS 7 using "http://localhost" in IE. "http://127.0.0.1" doesn't work either. The strange thing is that if I add a binding on e.g. port 81, then I can reach "http://localhost:81". Turning off the firewall on the local machine doesn't help either. The site is reachable from the Internet. The local requests don't seem to hit IIS (there are no entries in the IIS log files). IIS is hosted on Windows Server 2008 R2 behind a hardware firewall device. Note that I'm a programmer, not a network administrator, so I'm having a hard time troubleshooting this.

    Read the article
