Search Results

Search found 155 results on 7 pages for 'mathias nielsen'.

Page 3/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying “Extensions, apps, and user scripts can only be added from the Chrome Web Store”. The “Learn More” link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn’t say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting:

        Enterprise Administrators: You can specify URLs that are allowed to install extensions, apps, and themes directly through the ExtensionInstallSources policy.

    So, I ran the following commands, then restarted Chrome and Chrome Canary:

        defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I’ve filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don’t want to add to the Chrome Web Store) in Chrome 21+?

    Update: The problem was that gist.github.com’s raw URLs redirect to a different domain. So, use these commands instead:

        # Allow installing user scripts via GitHub or Userscripts.org
        defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!
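    For cases where the ExtensionInstallSources policy is not an option, a user script can also be loaded as an unpacked extension in developer mode: put the .user.js file in a folder together with a minimal manifest.json along the lines of the sketch below (the name and the match patterns are placeholders; the script's own @match/@include metadata is not honoured this way, so the patterns must be repeated in the manifest), then use chrome://extensions → Developer mode → “Load unpacked extension…”.

        {
          "manifest_version": 2,
          "name": "My private user script",
          "version": "1.0",
          "content_scripts": [
            { "matches": ["http://*/*", "https://*/*"], "js": ["myscript.user.js"] }
          ]
        }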

    Read the article

  • SMTP port open - but not open

    - by Frederik Nielsen
    As some of you might know, I am setting up an Exchange server. Now I have run into another problem: I cannot connect to the SMTP service from outside the server! The ports are opened on the gateway device (a ZyXEL USG50), and the Windows firewall is off. I can see the packets travel through the ZyXEL firewall, and I can also see them with Wireshark on the server, so I know they are getting all the way in. I also know the server receives them and sends out a reply - and this is where things go bad! Analyzing with Wireshark, I get these errors in the return packets:

        Header checksum: 0x0000 [incorrect, should be 0x0779 (may be caused by "IP checksum offload"?)]
        Acknowledgment Number: 0x8e3337d1 [should be 0x00000000 because ACK flag is not set]

    What the (pardon my French) hell is going on? I really can't figure it out. Thanks in advance.
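    A hedged aside: the 0x0000 header checksum in a capture taken on the sending host is usually just a symptom of checksum offloading - the NIC fills the checksum in after Wireshark has already seen the frame - so it does not by itself indicate a broken reply. A quick way to test reachability from the outside, assuming a Linux or similar box is available somewhere off-net:

        # does the port answer at all from the internet?
        nc -vz your.external.ip 25
        # or run through the SMTP greeting (and STARTTLS, if offered)
        openssl s_client -starttls smtp -connect your.external.ip:25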

    Read the article

  • Exchange 2010 SP2 OWA performance

    - by Frederik Nielsen
    How do I increase performance in OWA 2010 SP2? I am running the CAS role on a separate installation, which has 8 GB RAM and 4 CPU cores, running virtualized in a VMware environment. However, the load times are pretty bad, so is there any way to improve them? I am thinking of putting a Linux caching server in front of OWA, but will that work? And how should it be done? Alright, I "fixed" it - it was just a temporary issue. Thanks for your replies.

    Read the article

  • VNC application/terminal server

    - by sebastian nielsen
    Which software should I use if I want to set up a Linux VNC terminal server that works this way:

        - The VNC server should be able to accept up to X simultaneous connections on the same port 5900.
        - The VNC server should use 640x480 at 8 or 16 bit color.
        - When the VNC server receives a connection, it should start a new "session" for the user and auto-launch a specific Linux application for that user.
        - If the application is killed, crashes, or exits in any way, the user should be disconnected (kicked) from the server.
        - If the user disconnects, the application should be killed gracefully, so that it can clean up. (There should be no way to "pick up" an old session.)

    Any ideas?
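    One pattern that fits this description (a hedged sketch, not a recommendation from the thread): run Xvnc from xinetd, so each TCP connection on port 5900 gets its own short-lived X/VNC session, and have that session log straight into the application via XDMCP. The service name, paths, and security settings below are assumptions, and the XDMCP side (a display manager whose session runs only the application, so the session ends when the application exits) is not shown.

        # /etc/xinetd.d/vncterm -- assumes TigerVNC's Xvnc and xinetd are installed
        service vncterm
        {
            type            = UNLISTED
            port            = 5900
            socket_type     = stream
            protocol        = tcp
            wait            = no
            user            = nobody
            server          = /usr/bin/Xvnc
            server_args     = -inetd -once -query localhost -geometry 640x480 -depth 16 -SecurityTypes None
            disable         = no
        }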

    Read the article

  • Exchange 2010 outlook anywhere - shows internal URL

    - by Frederik Nielsen
    I am setting up Exchange 2010 SP2 for a customer. However, the server address returned by autodiscover is wrong: it points to the internal domain (.local) and not to the external address. How do I change this? Here's an image to describe what I mean: it is the upper field that is wrong. I don't want users to have to enable the RPC-over-HTTP settings themselves, as they know next to nothing about computers. Thank you in advance.

    Read the article

  • Configure PostgreSQL pg_hba.conf to restrict role access

    - by Mathias
    Hello Postgres experts. I am completely new to the game but need the following: I create a new role with login, let's say User1. I then create a database 'User1Database' and set User1 as the owner. User1 has no rights to do anything except for access. Now when I connect as User1 it somehow has access to all databases. I then learned I need to write something in pg_hba.conf. User1 should have full access to User1Database and absolutely no access to anything else. What lines do I need to add to my pg_hba file? Currently it looks like this:

        # IPv4 local connections:
        host    all    all    127.0.0.1/32    md5
        # IPv6 local connections:
        host    all    all    ::1/128         md5
        host    all    all    0.0.0.0/0       md5

    Hope someone can write me the exact lines and explain them to me.
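    A possible pair of lines, offered as a sketch only - the database and role names are taken from the question, and the address ranges are assumptions. pg_hba.conf is evaluated top-down with first match winning, so these must be placed above the existing catch-all "host all all" lines. Note that pg_hba.conf only gates who may connect from where; keeping User1 out of the other databases can also be enforced inside the server with REVOKE CONNECT ON DATABASE ... FROM PUBLIC.

        # allow User1 to reach only its own database, with password auth
        host    User1Database    User1    0.0.0.0/0    md5
        # explicitly refuse User1 everywhere else (matched before the catch-all lines)
        host    all              User1    0.0.0.0/0    reject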

    Read the article

  • MySQL replicate multiple places

    - by Frederik Nielsen
    Very tricky to find a good title for this question, but here goes: I have a few development machines that I develop my PHP applications on, testing via a local webserver. This works out pretty well for each machine. However, I would like to replicate the DB between my machines and a central location. So, to sum up:

        DEV1 - CENTRAL
        DEV2 - CENTRAL
        DEV3 - CENTRAL
        CENTRAL - DEV1
        CENTRAL - DEV2
        CENTRAL - DEV3

    I hope this makes sense, as I cannot find an easy way to describe it. Basically, it is a two-way replication, where all four databases contain the same data, and each of them can be updated locally and then pushed out to the others. Is this actually doable? All my dev machines are running Windows 7, and my central DB server is running CentOS 6.
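    For what it's worth, a hedged aside: stock MySQL replication is one-directional master/slave, so a true four-node multi-master setup is not available out of the box; the usual approximation is a replication ring where every node is master to the next and slave of the previous one. Each node would need a my.cnf fragment along these lines - the numbers are placeholders for a four-node ring, and write-conflict handling is still entirely up to the application:

        [mysqld]
        server-id                = 1        # unique on every node (1..4)
        log-bin                  = mysql-bin
        log-slave-updates                   # pass replicated changes on around the ring
        auto_increment_increment = 4        # total number of nodes in the ring
        auto_increment_offset    = 1        # this node's position (1..4)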

    Read the article

  • Problems forwarding zone to another DNS server.

    - by sebastian nielsen
    I have an authoritative DNS server at 83.248.21.18 which is authoritative for the domain "finahemgoteborg.se". Now my registrar requires me to have 2 DNS servers for the domain, so I would like the machine 85.228.103.141 to just forward all incoming queries for "finahemgoteborg.se" to the 83.248.21.18 server. On the 85.228.103.141 BIND server, I have the following config:

        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };

    But the problem is that 85.228.103.141 still responds with "REFUSED" when I query it for, for example, the www.finahemgoteborg.se A record. How can I fix it? I do NOT want to set up a master/slave situation, just one nameserver that forwards to another.

    Edit - the rest of named.conf:

        options {
            directory "/var/cache/bind";
            version "none";
            allow-recursion { "none"; };
            minimal-responses no;
        };
        zone "sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns1sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns2sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };
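    An observation offered as a hedged aside: a forward zone is answered through BIND's recursive path, so with allow-recursion { "none"; } in the options block the server will refuse those lookups - which matches the REFUSED responses. The conventional way to give a registrar a second authoritative server, even though the question prefers to avoid it, is a secondary (slave) zone that transfers the data from 83.248.21.18; the file path below is an assumption, and the master would also need to allow zone transfers from 85.228.103.141.

        zone "finahemgoteborg.se" in {
            type slave;
            masters { 83.248.21.18; };
            file "/var/cache/bind/finahemgoteborg.se.db";
        };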

    Read the article

  • What are these? Are they broken?

    - by Chris Nielsen
    Please excuse the poor image quality: What are the components that I've circled in red? The ones on the left look whole and solid. The ones on the right have cracked tops, and although this picture doesn't show it, there are small brown threads coming out of the top. Are the cracked ones broken, or is that supposed to happen? If they ARE broken, is this something I should worry about? This is a video card, and it appears to be fully functional: I'm using it while writing this post.

    Read the article

  • Tomcat OutOfMemory after switching JVM

    - by Mathias
    I have a Tomcat 6 server running on Debian Squeeze with 4 webapps on it, plus an in-JVM ActiveMQ server. It has been running for about a year with the same memory settings, on OpenJDK 6. Everything has worked dandy, no issues at all. Now, for various reasons, I need to try out Sun's JDK. So, I installed Sun's JVM with a standard apt-get install sun-java6-bin, and switched using update-java-alternatives -s java-6-sun. However, when I start Tomcat, I get OutOfMemory and the server won't even start! If I switch back to OpenJDK, everything works fine again. I haven't had any memory issues on this server before, so it feels strange that the server suddenly won't start with Sun's JDK. Anybody have any clue as to why this might happen? Have I missed something? I have of course set heap sizes etc. in Tomcat; currently running with -Xms256m -Xmx1024m, which as mentioned works on OpenJDK but gives OutOfMemory on the Sun JDK... EDIT: the server is 64-bit, OpenJDK and Sun are both 1.6.0, both 64-bit JVMs.
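    Two things worth checking, offered as hedged guesses rather than a confirmed diagnosis: whether the OutOfMemoryError is for the permanent generation (Sun's HotSpot keeps class metadata in a fixed-size PermGen that the -Xms/-Xmx flags do not cover), and whether the options are actually being applied now that the active JVM has changed. On Debian, the Tomcat 6 init script reads both settings from /etc/default/tomcat6; the values below are illustrative, not tuned recommendations.

        # /etc/default/tomcat6
        JAVA_HOME=/usr/lib/jvm/java-6-sun
        JAVA_OPTS="-Djava.awt.headless=true -Xms256m -Xmx1024m -XX:MaxPermSize=256m"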

    Read the article

  • Backup script to FTP with timed subfolders

    - by Frederik Nielsen
    I want to make a backup script that makes a .tar.gz of a folder I define, e.g. /root/tekkit/world. This .tar.gz file should then be uploaded to an FTP server and named after the time it was uploaded, for example: 07-10-2012-13-00.tar.gz. How should such a backup script be written? I have already figured out the .tar.gz part - I just need the naming and the uploading to FTP. I know that FTP is not the most secure way to do it, but as it is non-sensitive data, and FTP is the only option I have, it will do.

    Edit: I ended up with this script:

        #!/bin/bash

        # have some path predefined for backup unless one is provided as first argument
        BACKUP_DIR="/root/tekkit/world/"
        TMP_DIR="/tmp/tekkitbackup/"
        FINISH_DIR="/tmp/tekkitfinished/"

        # construct name for our archive
        TIME=$(date +%d-%m-%Y-%H-%M)

        if [ $1 ]; then
            BACKUP_DIR="$1"
        fi

        echo "Backing up dir ... $BACKUP_DIR"
        mkdir $TMP_DIR
        cp -R $BACKUP_DIR $TMP_DIR
        cd $FINISH_DIR
        tar czvfp tekkit-$TIME.tar.gz -C $TMP_DIR .

        # create upload script for lftp
        cat <<EOF> lftp.upload.script
        open server
        user user password
        lcd $FINISH_DIR
        mput tekkit-$TIME.tar.gz
        exit
        EOF

        # start backup using lftp and script we created; if all went well print simple message and clean up
        lftp -f lftp.upload.script && ( echo Upload successfull ; rm lftp.upload.script )

    Read the article

  • Postfix message ID originating process?

    - by Anders Braüner Nielsen
    Last night my Postfix mail server (Debian Squeeze with Dovecot, Roundcube, OpenDKIM and SpamAssassin enabled) started sending out spam from a single domain of mine, like these:

        $ cat mail.log | grep D6930B76EA9
        Jul 31 23:50:09 myserver postfix/pickup[28675]: D6930B76EA9: uid=65534 from=<[email protected]>
        Jul 31 23:50:09 myserver postfix/cleanup[27889]: D6930B76EA9: message-id=<[email protected]>
        Jul 31 23:50:09 myserver postfix/qmgr[7018]: D6930B76EA9: from=<[email protected]>, size=957, nrcpt=1 (queue active)
        Jul 31 23:50:09 myserver postfix/error[7819]: D6930B76EA9: to=<[email protected]>, relay=none, delay=0.03, delays=0.02/0/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta5.am0.yahoodns.net[66.196.118.33] while sending RCPT TO)

    The domain in question did not have any accounts enabled, only a catch-all alias set through Postfix Admin - most emails were sent from a specific address I use frequently, but some were also sent from bogus addresses. None of the other virtual domains handled by Postfix were affected. How can I find out what process was feeding postfix/sendmail, or get more info on where the messages originated? As far as I can tell PHP mail() wasn't used, and I've run several open relay tests. I did a little tinkering (removed winbind from the server and IPv6 addresses from main.cf) after the attack and it seems to have subsided, but I still have no idea how my server was suddenly sending out spam. Maybe I fixed it - maybe I didn't. Can anyone help me figure out how I was compromised? Anywhere else I should look? I've run Linux Malware Detect on recently changed files but nothing was found.
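    A few generic starting points, hedged since nothing below is specific to this incident: uid=65534 in the pickup line is the nobody account, so the mail was injected locally through the sendmail binary by some process running under that uid rather than arriving over SMTP - which is also why open relay tests come back clean. The queue and the web root are the usual places to look:

        # see what is still queued and which sender addresses are being forged
        postqueue -p | less
        # once the source is identified, drop the deferred junk
        postsuper -d ALL deferred
        # look for web code that can send mail and was changed recently
        grep -ril 'mail(' /var/www | xargs ls -lt | head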

    Read the article

  • Not getting IP from ISP on Multicast Network

    - by Johan Nielsen
    I'm having an odd issue with my ISP (COMX.dk). I have a managed access gateway box (Telsay) with three 8P8C ports for Internet and IP-TV (on different VLANs, respectively, so my ISP tells me). To use a port you have to register your device's MAC address through an online interface, and your device is then paired with a static IP. I am using one port actively and I have registered another device (a router). The router is configured to listen for an active DHCP server on the network. When my router gets a lease, it receives a private IP, 192.168.2.2 (not the one bound to my MAC), which is odd! I disconnected my router from the gateway and connected my laptop directly. The same thing happened - I was given a private address. I did a port scan on the gateway, found port 80 open and browsed to the IP. I was then presented with the management interface of a Belkin wireless router (HMMM!!!!) <-- by the way, not my gear. At this point I called the ISP to let them know of my issue/findings - only to be told "Well, we can't see any rogue DHCP servers" (thinking to myself: well, I can). I then decided it could be fun to try the other port of my gateway, only to experience the same. So I reconnected my router and used the remaining port to set up an observer (Wireshark in promiscuous mode, etc.). I can see my router trying to discover a DHCP server, but I can also see my ISP's IGMP and PIMv2 packets just repeating the same pattern. Hello...Hello...Hello :) So I called them again, only to get the same response: "We don't see any rogue DHCP servers... we can't see the host you are talking about (the MAC address of the Belkin router)... you are definitely connected through wireless?!?" (no I'm not - no such thing as a wireless wire, I thought to myself). My questions are: What is going on (besides what I'm reporting here)? What am I seeing that they don't? What can I tell them so they can resolve mine/their issue?
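    To hand the ISP something concrete, one option (a hedged suggestion, with the interface name as an assumption) is to capture the DHCP exchange itself and note the source MAC of whatever is answering with the 192.168.2.x lease - that MAC should identify the rogue Belkin directly:

        # print link-level headers (-e) so the offending DHCP server's MAC is visible
        tcpdump -n -e -i eth0 'port 67 or port 68'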

    Read the article

  • Exchange 2010 certificate errors

    - by Frederik Nielsen
    I have a problem with my newly set up Exchange environment for our hosted customers. First off, when configuring the Outlook client, it gives a certificate warning even though the certificate has been bought and set up. I am using a setup like this:

        autodiscover.CUSTOMERDOMAIN.TLD    CNAME    autodiscover.exchange.COMPANYDOMAIN.TLD

    (COMPANYDOMAIN is our company, which hosts the Exchange servers; CUSTOMERDOMAIN is the customer's domain.) Shouldn't that work? I know that Microsoft does something like that for Office 365, but I really don't think they buy a certificate for every customer. So I guess some redirection should be set up somehow - any guidance? Next thing: when we accept that error and move on to actually starting Outlook, it states that the certificate is not valid for the RPC proxy server exchange.COMPANYDOMAIN.TLD - this domain is not right, as it is not included in the certificate. I would instead like this domain to be mail.exchange.COMPANYDOMAIN.TLD. I tried to run this script setting both the internal and external URLs to be the same, with no luck. Any guidance on this one? I am running Exchange 2010 SP2, with CAS, HT and MBX split across 3 different servers.
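    A hedged note on the first problem: a CNAME still makes Outlook connect to a host name in the customer's domain, so the certificate warning is expected. Autodiscover can instead be published with an SRV record in the customer's zone, which sends Outlook to the provider's host name so only the provider's certificate has to match (Outlook may still ask once to confirm the redirect). The record below is a sketch in zone-file form, with the placeholder names kept from the question. For the second problem, the host name Outlook shows for the RPC proxy is the Outlook Anywhere external host name on the CAS, which can be changed with Set-OutlookAnywhere -ExternalHostname in the Exchange Management Shell.

        ; in the customer's DNS zone, instead of the autodiscover CNAME
        _autodiscover._tcp.CUSTOMERDOMAIN.TLD. 3600 IN SRV 0 0 443 autodiscover.exchange.COMPANYDOMAIN.TLD.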

    Read the article

  • ESXi5 - management services crashes - vms running

    - by Frederik Nielsen
    I have a setup with two ESXi 5 servers. We are (were) running with an iSCSI box serving disks for the VMs - however, we are in the process of migrating away from it, because the storage OS disk is bad. Now, one of the ESXi hosts has been running for ~20 hrs, and it seems like the management services just crashed on that host. The VMs are still running - so it's not really serious. However, I want to fix it. Should I be worried? Will the VMs keep running? The host does respond to pings. I am running a vCenter to administer the hosts. Thanks in advance.
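    If it is just the management agents that have died, they can usually be restarted without touching the running VMs - either from the DCUI ("Restart Management Agents") or over SSH, assuming the ESXi Shell/SSH is enabled. The commands below are the standard ESXi 5 ones; note that in-flight vCenter tasks against that host would be interrupted.

        # restart the host agent and the vCenter agent individually
        /etc/init.d/hostd restart
        /etc/init.d/vpxa restart
        # or restart the whole set of management agents
        services.sh restart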

    Read the article

  • How to use dedicated video card instead of onboard?

    - by Mathias Lykkegaard Lorenzen
    I tried running DxDiag (DirectX diagnostics), and I noticed that my graphics card is set to the onboard one that comes with the Core i5 processor (some Intel HD stuff). On my computer, I also have a dedicated graphics card (an Nvidia 310). No serious gaming stuff, I know - just for programming. However, I would still love to know how to switch to that dedicated graphics card instead. My laptop is an MSI CX720.

    Read the article

  • Force delivery retry without restarting the SMTP Service on Windows Server 2008 R2

    - by Mathias R. Jessen
    I have a Windows Server 2008 R2 box hosting 3 virtual SMTP servers: vSMTP01, vSMTP02 and vSMTP03. The first two are configured to deliver all messages to dedicated smarthosts, while the last is set to just deliver the messages on its own. All other delivery settings are at their defaults.

                                           /----(vSMTP01)-----> {SMARTHST01}
        ----Inbound mail--->---SMTPSRV01---[----(vSMTP02)-----> {SMARTHST02}
                                           \----(vSMTP03)-----> { Internet }

    Now I want to take SMARTHST01 out for maintenance, but I don't want to reject submissions to vSMTP01 while doing so, so I just let it continue running. When SMARTHST01 is no longer responding, vSMTP01 queues the messages and waits for the first retry interval to pass (15 minutes). So far so good. Let's say SMARTHST01 comes online again after 20 minutes. The first interval has passed, and I'll have to wait another 25 minutes for the second retry interval to pass. If I stop and start the SMTP Service (Services.msc - Simple Mail Transfer Protocol service - Stop), the server will retry all deliveries, but that would cause a service interruption for ALL virtual SMTP servers on the machine, which is highly undesirable. How can I manually force vSMTP01 to retry delivery of all queued messages without interrupting the service of vSMTP02 and vSMTP03?

    Read the article

  • Backlight dimming doesn't work

    - by Mathias
    My Packard Bell EasyNote TS11HR notebook does not have an option for dimming the display backlight. At night, my eyes start to hurt because of the strong light from the screen. My laptop is 2-3 months old and I am sure this worked before. When I click the battery icon in the notification area, it says (in Danish) roughly "the brightness setting may reduce battery life". However, I cannot dim the backlight. I have tried downloading programs for dimming the screen, but they only make the picture darker instead of dimming the backlight. I have tried updating my drivers and looking for a setting in the BIOS. I also plan to use an Ubuntu LiveCD to try controlling it. As of now, though, the backlight is locked at maximum. Any ideas?
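    For the Ubuntu LiveCD experiment, one low-level thing worth trying (a hedged suggestion; the device name under /sys varies by machine) is driving the kernel's backlight interface directly - if that works, the panel hardware is fine and the problem sits in the Windows hotkey/driver stack:

        # list the backlight devices the kernel exposes
        ls /sys/class/backlight/
        # read the maximum level, then try a lower value (device name is an example)
        cat /sys/class/backlight/acpi_video0/max_brightness
        echo 3 | sudo tee /sys/class/backlight/acpi_video0/brightness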

    Read the article

  • Ubuntu 12.10 sources.list empty after install

    - by Martin Nielsen
    I recently installed the Ubuntu 12.10 server version from a USB stick. The step "Install additional software" (or whatever it is called) kept failing, so I thought "screw it" and continued. Everything else worked like a charm. Or so I thought. It turns out the only two entries in my sources.list are the install CD, which means I have no way of getting... well... anything installed. Can someone give me a short list of the repositories I need, so I can put them in the file? And on a similar note: what is the comment character for sources.list? #?
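    For reference, a minimal /etc/apt/sources.list for 12.10 ("quantal") would look roughly like the lines below - the mirror is an assumption (a country mirror such as dk.archive.ubuntu.com works the same way) - and yes, # is the comment character. Run apt-get update afterwards.

        # /etc/apt/sources.list -- minimal sketch for Ubuntu 12.10 (quantal)
        deb http://archive.ubuntu.com/ubuntu quantal main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu quantal-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu quantal-security main restricted universe multiverse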

    Read the article

  • Can't access server from external IP

    - by Mathias
    I have a problem with my web server; I can't access it from the external IP address. I'm using IIS 7, but I've tried with Apache on Linux as well. I have forwarded all traffic on port 80 to my computer, but it just won't work. I've done port forwarding for my Minecraft server and that did work, but when I try it with a web server, it doesn't. I've been looking on many, many forums, but their methods don't work for me. My router is a Speedport W 723V, if anyone knows that one. Any help is appreciated.
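    A couple of hedged checks that often explain this exact symptom: many consumer routers do not support NAT loopback, so browsing to the external IP from inside the LAN fails even though the forward works from outside, and some ISPs block inbound port 80 entirely. Testing from a genuinely external host separates the two cases; the IP below is a placeholder.

        # run from a host outside your own network (a VPS, a friend's machine, a phone on 3G)
        curl -I http://YOUR.EXTERNAL.IP/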

    Read the article

  • Book “Team Foundation Server 2012 Starter” published

    - by terje
    During the summer and fall this year, my colleague Jakob Ehn and I worked together on a book project that has now finally hit the stores! The title of the book is Team Foundation Server 2012 Starter, and it is published by Packt Publishing. Get it from http://www.packtpub.com/team-foundation-server-2012-starter/book or from Amazon: http://www.amazon.com/dp/1849688389

    The book is part of Packt's concept of starter books, intended for people new to Team Foundation Server 2012 who want a quick guide to getting it up and working. It covers the fundamentals, from installing and configuring it to using it for source control, work items and builds. It is written as a step-by-step guide, but also includes best-practice advice in the different areas. It covers both the on-premises and the TFS Services version, and it ends with a list of links and references to the most relevant Visual Studio 2012 ALM sites. Our good friend and fellow ALM MVP Mathias Olausson did the review of the book - thanks again, Mathias! We hope the book fills the gap between the various online guide sites and the more advanced books that are out there.

    Book Description: Your quick-start guide to TFS 2012, top features, and best practices with hands-on examples.

    Overview:
        - Install TFS 2012 from scratch
        - Get up and running with your first project
        - Streamline release cycles for maximum productivity

    In Detail: Team Foundation Server 2012 is Microsoft's leading ALM tool, integrating source control, work item and process handling, build automation, and testing. This practical "Team Foundation Server 2012 Starter Guide" will provide you with clear step-by-step exercises covering all major aspects of the product. This is essential reading for anyone wishing to set up, organize, and use TFS. This hands-on guide looks at the top features in Team Foundation Server 2012, starting with a quick installation guide and then moving into using it for your software development projects. Manage your team projects with Team Explorer, one of the many new features for 2012. Covering all the main features in source control to help you work more efficiently, including tools for branching and merging, we will delve into the Agile planning tools for planning your product and sprint backlogs. Learn to set up build automation, allowing your team to become faster, more streamlined, and ultimately more productive with this "Team Foundation Server 2012 Starter Guide".

    What you will learn from this book:
        - Install TFS 2012 on premises
        - Access TFS Services in the cloud
        - Quickly get started with a new project with product backlogs, source control, and build automation
        - Work efficiently with source control using the top features
        - Understand how the tools for branching and merging in TFS 2012 help you isolate work and teams
        - Learn about the existing process templates, such as Visual Studio Scrum 2.0
        - Manage your product and sprint backlogs using the Agile planning tools

    Approach: This Starter guide is a short, sharp introduction to Team Foundation Server 2012, covering everything you need to get up and running.

    Who this book is written for: If you are a developer, project lead, tester, or IT administrator working with Team Foundation Server 2012, this guide will get you up to speed quickly and with minimal effort.

    Read the article

  • Canonical drops OpenOffice.org for LibreOffice in the next version of Ubuntu

    Canonical drops OpenOffice.org for LibreOffice in the next version of Ubuntu. The next version of the open-source operating system Ubuntu will ship the LibreOffice office suite in place of OpenOffice.org, according to a message from Mathias Klose, a member of the Ubuntu development team. The LibreOffice suite is a fork of the OpenOffice.org project set up by the Document Foundation following disagreements with Oracle after its acquisition of Sun. The Document Foundation, created by members of the OpenOffice.org community, had received the backing of Google and Red Hat and…

    Read the article

  • Rocky Mountain Tech Trifecta v3.0

    - by Jeff Certain
    The Rocky Mountain Tech Trifecta is an annual event held in Denver in late February or early March. The last couple of these have been amazing events, with great speakers like Beth Massi, Scott Hanselman, David Yack, Kathleen Dollard, Ben Hoelting, Paul Nielsen… need I go on? Registration is open at http://www.rmtechtrifecta.com. The speaker list hasn’t been finalized, but it’s sure to be another great event. Don’t miss it!

    Read the article

  • Introduction To Web And Flash Design

    An Introduction to Flash and Web Design This is the time of the Internet. People from all parts of the world make use of the Internet as a method to sell, buy, learn, advertise, and for many other f... [Author: Max Nielsen - Web Design and Development - August 24, 2009]

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >