Daily Archives

Articles indexed Wednesday, October 24, 2012


  • Extending Expression Blend 4 & Blend for Visual Studio 2012

    - by Chris Skardon
    Just getting this off the bat: I presume this will also work for Blend 5, but I can’t confirm it… Anyhews, I imagine you’re here because you want to know how to create an addin for Blend, so let’s jump right in there! First and foremost, we’re going to need to ensure our development environment has the right setup, so the checklist:

      • Visual Studio 2012
      • Blend for Visual Studio 2012

    OK, let’s create a new project (class library, .NET 4.5): Hello.Extension. The ‘.Extension’ bit is very, very important. The addin will not work unless it is named in this way. You can put whatever you want at the front, but it has to have the Extension bit. OK, so now we have a solution with one project. To this project we need to add references to the following:

      • Microsoft.Expression.Extensibility (from C:\Program Files\Microsoft Visual Studio 11.0\Blend\ -- the x86 folder if you are on an x64 Windows install)
      • Microsoft.Expression.Framework (same location as above)
      • PresentationCore
      • PresentationFramework
      • WindowsBase
      • System.ComponentModel.Composition

    Got them? ACE. Let’s now add a project to contain our control, so create a new WPF Application project, cunningly named something like ‘Hello.Control’… (I’m creating a WPF application here because I’m too lazy to dig up the correct references, and this will add all the ones I need.) Once that is created, delete the App.xaml and MainWindow.xaml files; we won’t be needing them. You will also need to change the properties of the project itself so it is only a class library. Once that is done, let’s add a new UserControl, which will be this:

        <UserControl x:Class="Hello.Control.HelloControl"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     mc:Ignorable="d"
                     d:DesignHeight="300" d:DesignWidth="300">
            <Grid>
                <TextBlock Text="HELLO!!!"/>
            </Grid>
        </UserControl>

    Impressive, eh? Now let’s reference the WPF project from the Extension library. All that’s left now is to code up our extension… So, add a class to the Extension project (the name doesn’t matter) and make it implement the IPackage interface from the Microsoft.Expression.Extensibility library:

        public class HelloExtension : IPackage { /**/ }

    We’ll implement the two methods we need to:

        public class HelloExtension : IPackage
        {
            public void Load(IServices services) { }
            public void Unload() { }
        }

    We’re only really concerned with the Load method in this case, as, let’s face it, the extension we have doesn’t need to do a lot to bog off. The interesting thing about the Load method is that it receives an IServices instance. This gives us access to all the services that Expression provides; in this case we’re interested in one in particular, the IWindowService. So, let’s get that bad boy…

        private IWindowService _windowService;

        public void Load(IServices services)
        {
            _windowService = services.GetService<IWindowService>();
        }

    Nailed it… But why? The WindowService allows us to register our UserControl with Blend, which in turn allows people to activate and see it, which is a big plus point. So, let’s do that… We’ll create an ‘Initialize’ method to create our new control and add it to the WindowService:

        private HelloControl _helloControl;

        public void Initialize()
        {
            _helloControl = new HelloControl();
            if (_windowService.PaletteRegistry["HelloPanel"] == null)
                _windowService.RegisterPalette("HelloPanel", _helloControl, "Hello Window");
        }

    First we check that we’re not already registered, and if we’re not, we register. The first argument is the identifier used by the service to, well, identify your extension. The second argument is the actual control, and the third argument is the name that people will see in the ‘Window’ menu of Blend itself (so, important note here – don’t put anything embarrassing or (need I say it?) sweary…). There are only two things to do now: call ‘Initialize()’ from our Load method, and export the class. This is easy money – add [Export(typeof(IPackage))] to the top of our class… The full code will (should) look like this:

        [Export(typeof (IPackage))]
        public class HelloExtension : IPackage
        {
            private HelloControl _helloControl;
            private IWindowService _windowService;

            public void Load(IServices services)
            {
                _windowService = services.GetService<IWindowService>();
                Initialize();
            }

            public void Unload() { }

            public void Initialize()
            {
                _helloControl = new HelloControl();
                if (_windowService.PaletteRegistry["HelloControl"] == null)
                    _windowService.RegisterPalette("HelloControl", _helloControl, "Hello Window");
            }
        }

    If you build this and copy it to your ‘Extensions’ folder in Blend (C:\Program Files\Microsoft Visual Studio 11.0\Blend\) and start Blend, you should see ‘Hello Window’ listed in the Window menu. That, as they say, is it!

    Read the article

  • November 2012 Chicago IT Architects Group Meeting Announcement

    - by Tim Murphy
    The year is quickly coming to an end. This is the most exciting part of the year, with technology manufacturers in overdrive trying to release as many products for Christmas as possible. Our group is trying to do our part to bring order to the madness with one last presentation for the year. Norman Murrin will be speaking on November 20th on Adopting Agile Processes in the Enterprise. Be sure to join us by registering at the link below. Register

    Read the article

  • Logging the client IP with Nginx/Varnish/Apache

    - by jetboy
    I have Nginx listening on port 443 as an SSL terminator, and proxying unencrypted traffic to Varnish on the same server. Varnish 3 is handling this traffic, and traffic coming in directly on port 80. All traffic is passed, unencrypted, to Apache instances on other servers in the cluster. The Apache instances use mod_rpaf to replace the logged client IP with the contents of the X-Forwarded-For header. My problem is that if the traffic is coming via Nginx, while the 'correct' client IP is getting logged in the VarnishNCSA logs, it looks as if Varnish is (understandably) replacing Nginx's X-Forwarded-For header with 127.0.0.1 downstream, and this is getting logged with Apache. Is there a nice simple way to stop Varnish rewriting X-Forwarded-For if it's already populated?
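
    A hedged sketch of one approach: Varnish 3’s built-in vcl_recv unconditionally appends client.ip to X-Forwarded-For, so handling the header yourself and making sure the built-in logic never runs afterwards is the usual fix. The return here only illustrates that point – fold this into your existing vcl_recv and its return paths (the ACL assumes nginx is the only proxy connecting from localhost):

        acl local_proxy { "127.0.0.1"; }

        sub vcl_recv {
            if (!req.http.X-Forwarded-For) {
                # direct port-80 traffic: record the real client address
                set req.http.X-Forwarded-For = client.ip;
            } else if (client.ip !~ local_proxy) {
                # came through some other proxy: append as Varnish normally would
                set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
            }
            # else: nginx already set it - leave it untouched
            return (lookup);    # prevents the built-in vcl_recv from appending 127.0.0.1
        }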

    Read the article

  • Apply Group Policy to Remote Desktop Services users but not when they log on to their local system

    - by Kevin Murray
    Running Windows Server 2008 Service Pack 2 with the Remote Desktop Services role. I want to hide the server's drives using a GPO, but not the users' local drives when they are logged on to their local systems. Using a GPO, I went to "User Configuration - Policies - Administrative Templates - Windows Components - Windows Explorer" and enabled "Hide these specified drives in My Computer" and "Prevent access to drives from My Computer", and in both used "Restrict all drives". Then under "Security Filtering" for the GPO, I restricted it to the system running Remote Desktop Services and the specific users who will be using RDS. I then applied the GPO to our domain and it worked a little too well. Not only was I successful in getting the GPO to work for RDS users, but it also affected those same users at their local systems as well. I've tried everything I can think of, but can't figure out how to apply this just to the RDS sessions and not to their local systems. What am I missing?

    Read the article

  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices included in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool. I wanted to name the new pool the same name, 'testpool'. Basically I did the following:

        zfs send testpool@backup > /tmp/test-dump
        zpool export -f testpool
        zpool create -f testpool newdevice
        zfs receive -F testpool < /tmp/test-dump

    Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices are 'newdevice'; they are a separate 3.) Is there any way I can recover the data on those devices? I'm thinking since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know. Edit: More info. I did a 'zpool import' and got this:

        bash-3.00# zpool import
          pool: testpool
            id: 14781458723915654709
         state: ONLINE
        action: The pool can be imported using its name or numeric identifier.
        config:
                testpool    ONLINE
                  c5t8d0    ONLINE
                  c5t9d0    ONLINE
                  c5t10d0   ONLINE

    So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
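
    For reference, zpool import does accept exactly that: the numeric identifier followed by a new pool name. A sketch using the ID from the output above (the name 'oldtestpool' is just an example):

        # import the exported pool by its numeric ID, renaming it so it
        # no longer clashes with the new pool that reused 'testpool'
        zpool import 14781458723915654709 oldtestpool

        # check that the old filesystems are visible again
        zfs list -r oldtestpool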

    Read the article

  • cPanel webmail roundcube direct login

    - by Jinx
    I have a small web hosting service that I offer alongside my web development/design services. I run CentOS with cPanel on it, and I'd like my clients to use Roundcube by default without ever seeing cPanel's page for picking a webmail app (SquirrelMail/Horde/Roundcube). I manage everything on the server, so they don't need access to the additional features that can be found on that page. So basically, how can I make www.somepage.com/webmail go directly to Roundcube for every single account on my server? I know how to do it manually for each account, but that's a pain to do. Thanks!

    Read the article

  • Using 64 bit wuauclt from 32 bit command prompt

    - by Tim Brigham
    I have a script that for legacy reasons needs to run inside a 32-bit command shell. This script also includes references to certain core Windows binaries - most notably wuauclt, but others as well - which are not accessible by default within the 32-bit environment. This script is being run in several locations, including many Windows 7 and Server 2008 R2 boxes. I'm aware of the possibility of copying files from System32 to SysWOW64 in order to get around this. Is there any better method - something along the lines of adding an entry to the path variable - which will allow me to fall back to these 64-bit binaries from within a 32-bit script?
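
    For what it's worth, 64-bit Windows exposes a virtual Sysnative directory to 32-bit processes for exactly this case, so a 32-bit script can usually reach the native binaries without copying anything. A sketch (Sysnative is only visible from a 32-bit process on x64):

        REM From inside a 32-bit cmd.exe on 64-bit Windows:
        REM %windir%\System32 is silently redirected to SysWOW64, but the
        REM virtual %windir%\Sysnative path bypasses the redirection and
        REM reaches the real 64-bit System32.
        %windir%\Sysnative\wuauclt.exe /detectnow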

    Read the article

  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has apache and wsgi working. I also have a python script that works just fine using the make_server command:

        if __name__ == '__main__':
            from wsgiref.simple_server import make_server
            srv = make_server('', 8080, display_status)
            srv.serve_forever()

    Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing. I found these lines in apache2.conf:

        WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi
        WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup moin

    And moin.wsgi is as listed:

        import sys, os
        sys.path.insert(0, '/usr/local/share/moin')
        from MoinMoin.web.serving import make_application
        application = make_application(shared=True)

    QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this:

        WSGIScriptAlias /status /mypath/status.wsgi
        WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup status

    And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
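
    As a rough sketch of the usual answer: under mod_wsgi you don't daemonize anything yourself - Apache manages the daemon processes declared by WSGIDaemonProcess - you only need the .wsgi file to expose your WSGI callable under the module-level name 'application'. Something like this (the module name is hypothetical; display_status is the callable from the question):

        # /mypath/status.wsgi
        import sys
        sys.path.insert(0, '/mypath')           # make the app module importable

        # hypothetical module holding the display_status callable - adjust
        from status_app import display_status

        # mod_wsgi looks for a module-level callable named 'application'
        application = display_status

    The make_server()/serve_forever() block is only needed for the standalone wsgiref server; under mod_wsgi the file is imported rather than run as __main__, so that block simply never executes.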

    Read the article

  • Transferring DHCP using Windows Server Migration Tools - Why is PowerShell crashing on the import of the .mig file?

    - by Mike
    I am migrating DHCP from a Windows Server 2003 R2 DC to a Windows Server 2008 R2 DC. I've followed this video and its predecessor (Installing Windows Server Migration Tools): http://technet.microsoft.com/en-us/video/migrating-dhcp-using-the-windows-server-2008-r2-migration-tools.aspx I went through everything smoothly until the last step. I have exported a .mig file with my DHCP configuration on the old 2003 R2 server. I transferred this .mig file over to my 2008 R2 server; when running the import command, it appears to work for a minute or two and then I get a generic Windows "PowerShell has stopped working" error and I have to close the program. Under the problem details I see the following:

        FileVersionOfSystemManagementAutomation: 6.1.7600.16385
        InnermostExceptionType: System.AccessViolationException
        OutermostExceptionType: System.AccessViolationException
        DeepestPowerShellFrame: unknown
        OS Version: 6.1.7600.2.0.0.272.7
        LocaleID: 1033

    Seems like there are permissions issues maybe? I am running PowerShell as an admin and am logged in to the server as a domain administrator. Any ideas? Thanks

    Read the article

  • How do you set up CRON?

    - by user1723760
    I have never used cron before, but I want to use cron in order to perform scheduled jobs for a PHP script. The PHP script is called "inactivesession.php" and in the PHP script is this code:

        <?php
        include('connect.php');

        $createDate = mktime(0, 0, 0, 10, 25, date("Y"));
        // note: the date must be formatted Y-m-d to match the DATE_FORMAT
        // comparison below, and bind_param needs a variable, not a literal
        $selectedDate = date('Y-m-d', $createDate);
        $active = 0;

        $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate, '%Y-%m-%d') <= ?";
        $update = $mysqli->prepare($sql);
        $update->bind_param("is", $active, $selectedDate);
        $update->execute();
        ?>

    What I want is that when the above date is reached (25th Oct), the script performs the UPDATE statement above. But my question is: how do I use cron to do this? The server I am using is the university's server, known as helios. Does cron need to be set up in helios (do I have to call the admin for this), or is it something else which uses cron? I have never used cron before, so can you explain to me how it can be set up for the example above with the server I am using? Thanks UPDATE: I think the name of the OS is actually helios but I am not sure; I have a Wikipedia page on this: here. I will read the crontab Wikipedia page and see what I can find, but my question is: is cron already set up, so I can just go right ahead and use it, or do I need to set it up first?
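
    For the record, a per-user cron entry that runs the script daily looks roughly like this (the php binary and script paths are examples; on a shared university server, crontab may need to be enabled for your account by the admins):

        # open your personal crontab for editing
        crontab -e

        # then add a line like this to run the script every day at 00:05
        5 0 * * * /usr/bin/php /path/to/inactivesession.php

    The script itself then decides (as yours does with the date comparison) whether there is anything to update on a given day.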

    Read the article

  • Invalid Parameter on node puppet

    - by chandank
    I am getting an error of:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter port at /etc/puppet/manifests/nodes/node.pp:652 on node test-puppet

    My puppet node definition (line 652 of node.pp):

        node 'test-puppet' {
          class { 'syslog_ng':
            host    => "newhost",
            ip      => "192.168.1.10",
            port    => "1999",
            logfile => "/var/log/test.log",
          }
        }

    On the module side:

        class syslog_ng::config ($host, $ip, $port, $logfile) {
          file { '/etc/syslog-ng/syslog-ng.conf':
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            content => template('syslog-ng/syslog-ng.conf.erb'),
            notify  => Service['syslog-ng'],
            require => Class['syslog_ng::install'],
          }

          file { "/etc/syslog-ng/conf/${host}.conf":
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            notify  => Service['syslog-ng'],
            content => template("syslog-ng/${host}.conf.erb"),
            require => Class['syslog_ng::install'],
          }
        }

    I think I am doing it as per the puppet documentation.
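
    A hedged observation: the parameters are declared on syslog_ng::config, but the node declares the top-level syslog_ng class, which apparently has no port parameter of its own - hence "Invalid parameter port". One sketch of a fix is to declare the parameterised subclass directly (or give syslog_ng matching parameters that it forwards on):

        node 'test-puppet' {
          # declare the class that actually accepts these parameters
          class { 'syslog_ng::config':
            host    => "newhost",
            ip      => "192.168.1.10",
            port    => "1999",
            logfile => "/var/log/test.log",
          }
        }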

    Read the article

  • mkfs.xfs: libxfs_device_zero write failed: Input/output error

    - by Crazy_Bash
    I can't find a way to create a filesystem on one of my disks. First I'm getting the following output:

        [root@~]# mkfs.xfs /dev/sdb1
        mkfs.xfs: /dev/sdb1 appears to contain a partition table (dos).
        mkfs.xfs: Use the -f option to force overwrite.

    After using the -f flag:

        [root@~]# mkfs.xfs -f /dev/sdb1
        meta-data=/dev/sdb1              isize=256    agcount=32, agsize=22892696 blks
                 =                       sectsz=512   attr=2
        data     =                       bsize=4096   blocks=732566272, imaxpct=5
                 =                       sunit=0      swidth=0 blks
        naming   =version 2              bsize=4096   ascii-ci=0
        log      =internal log           bsize=4096   blocks=357698, version=2
                 =                       sectsz=512   sunit=0 blks, lazy-count=1
        realtime =none                   extsz=4096   blocks=0, rtextents=0
        mkfs.xfs: libxfs_device_zero write failed: Input/output error

    /dev/sdb:

        Disk /dev/sdb: 3001GB
        1    1049kB    3001GB    3001GB    primary

    Linux: CentOS 6.3, Linux 1 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux. What I've tried so far: recreating the partition with parted (rm 1).
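
    A hedged aside: an Input/output error surfacing from the block layer usually points at the disk, cabling or controller rather than at mkfs itself, so checking the kernel log and SMART data is the usual next step:

        dmesg | tail -50        # look for ata/scsi errors against sdb
        smartctl -a /dev/sdb    # from smartmontools; watch for reallocated
                                # or pending sectors and failed self-tests

    It may also be worth confirming the disk carries a GPT label, since a 3 TB disk exceeds what a dos/MBR partition table can address (mkfs.xfs reported a dos table before the -f run).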

    Read the article

  • Apache is responding with a blank white page

    - by Bruno Araujo
    I have the following situation: a site hosted in Apache 2.4, with SSL, that has worked like a charm for a while now. But out of nowhere, without modifications to the site, Apache started serving random blank pages. The workaround is to delete the browser's cookies or restart the browser. I've switched the virtual host to log in debug mode, but it didn't get me anywhere. Here is the debug log of a failed page load:

        [Wed Oct 24 10:57:35.762547 2012] [ssl:info] [pid 27854:tid 140617706374912] [client 192.168.10.150:58917] AH01964: Connection to child 147 established (server xxx.com.br:443)
        [Wed Oct 24 10:57:35.762739 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1966): [client 192.168.10.150:58917] AH02043: SSL virtual host for servername xxx.com.br found
        [Wed Oct 24 10:57:35.777479 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1899): [client 192.168.10.150:58917] AH02041: Protocol: TLSv1, Cipher: DHE-RSA-AES256-SHA (256/256 bits)
        [Wed Oct 24 10:57:35.779912 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(243): [client 192.168.10.150:58917] AH02034: Initial (No.1) HTTPS request received for child 147 (server xxx.com.br:443)
        [Wed Oct 24 10:57:35.780044 2012] [authz_core:debug] [pid 27854:tid 140617706374912] mod_authz_core.c(809): [client 192.168.10.150:58917] AH01628: authorization result: granted (no directives)
        [Wed Oct 24 10:57:40.783950 2012] [ssl:info] [pid 27854:tid 140617706374912] (70007)The timeout specified has expired: [client 192.168.10.150:58917] AH01991: SSL input filter read failed.
        [Wed Oct 24 10:57:40.784077 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_io.c(988): [remote 192.168.10.150:58917] AH02001: Connection closed to child 147 with standard shutdown (server xxx.com.br:443)

    Read the article

  • Network latency and speed of light

    - by James
    This was kind of covered by the following: Is minimum latency fixed by the speed of light?, but I would like to follow up a bit. The scenario is as follows: we have two opposing sites, one on the West Coast of the US and one in Ireland. The customer is in central Europe, and has requested a latency test. Ireland gives responses of ~65-70ms. However, the West Coast guys claim to be faster with a response of 60ms. Now a quick check says that light in fibre would take about 42ms to make the trip to the States and 8.5ms to Ireland. So obviously this is a single hop and does not include routers, switches, firewalls, protocol overhead etc. Would I be right to call BS on their figures? As a final note, I tested a ping to a Google IP address that was allegedly on the West Coast from a site that covered a similar distance and was amazed to get a response time of 20ms, suggesting ICMP packets that travel at twice the speed of light. So a) what am I missing, and b) am I right to suspect shenanigans? UPDATE: Guys, thanks so far for your help; I have been reading various previous questions on this. About 5 years ago I had an issue where the hop from the UK to Ireland added 10ms of latency no matter what we did. In the end I moved the servers. So imagine my surprise when I have guys that claim they are 5ms faster with a transatlantic trip. So again, should I call BS? Oh, and assume both sites are normal mortals that don't have access to Google magical routing, warp drives or flux capacitors. :)
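
    A quick sanity check of the arithmetic, as a sketch (the distances are those implied by the question's one-way figures; light in fibre travels at roughly two-thirds of c, about 200,000 km/s):

        # minimum possible round-trip time for light in fibre, ignoring
        # all routing, queueing and protocol overhead
        def rtt_floor_ms(distance_km, fibre_speed_km_s=200000.0):
            return 2 * distance_km / fibre_speed_km_s * 1000

        print(rtt_floor_ms(8400))   # central Europe <-> US West Coast: ~84 ms
        print(rtt_floor_ms(1700))   # central Europe <-> Ireland:       ~17 ms

    On those numbers a 60ms transatlantic round trip is below the physical floor, which supports the shenanigans hypothesis (or the test is actually reaching a nearer node, e.g. an anycast or CDN endpoint).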

    Read the article

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true: I have ssh-key based access to a remote user (cloud); that user has password-free root access via sudo. Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    Will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me that are going to be generally available?
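
    One standard alternative, for comparison: ssh can allocate the pseudo-terminal itself, which is usually enough to satisfy requiretty (a sketch):

        # -t forces pseudo-terminal allocation even when a command is given;
        # doubling it (-tt) forces allocation even if ssh's own stdin is not
        # a terminal, e.g. when invoked from cron or another script
        ssh -tt remotehost sudo yum -y install puppet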

    Read the article

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm, the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do, I assume about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern is about performance at the moment. My questions are: I assume that for each request, the rewrites block will be parsed and the regex will be evaluated. Am I right? Will there be a performance penalty if I use these rewrites? Can nginx handle this? Are there any "best practices" to follow when doing a lot of rewrites?
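
    One common pattern for a large set of one-to-one redirects, sketched here with illustrative names: a map block, which nginx evaluates as a single hash lookup per request instead of walking 500 sequential rewrite regexes:

        map $uri $redirect_target {
            default               "";
            /olddir/part1_de.htm  /newdir/sub/category/anotherpage.htm;
            # ...the other mappings, or keep them in a separate file:
            # include /etc/nginx/redirects.map;
        }

        server {
            ...
            if ($redirect_target != "") {
                return 301 $redirect_target;
            }
        }

    Exact-match map lookups go through a hash table, so the number of entries matters far less than with rewrite rules, which are tried in order on every request.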

    Read the article

  • Can I lvreduce after lvextend without losing the ext4 partition inside it?

    - by DrSAR
    In a botched attempt to move my root partition from one disk to another, I have done the following: added a new disk, partitioned it with parted (partition #3 is now almost totally filling the disk), and initialized a physical volume:

        $ pvcreate /dev/sdb3
          Physical volume "/dev/sdb3" successfully created

    Extended the volume group to include this new physical disk:

        $ vgextend myvg /dev/sdb3
          Volume group "myvg" successfully extended

    Extended the logical volume (I think this is where I ballsed it up: I think I should have pvmove'd stuff to the new PV in that group - can someone confirm?):

        $ lvextend /dev/mapper/myvg-root /dev/sdb3

    I would now like to undo the lvextend and then proceed with the original plan of moving the content of the old physical volume over to the new physical volume. Can I reduce the logical volume (I have not yet touched the ext4 partition that sits in /dev/mapper/myvg-root with something like resize2fs) without fear of damaging the ext4 filesystem? If so, how do I tell it to reduce by exactly the right amount?

        $ lvreduce --by-exactly-the-amount-occupied-by-PV /dev/sdb3 /dev/mapper/myvg-root
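
    A hedged sketch of the usual answer: because resize2fs was never run, the ext4 filesystem still ends where the old LV ended, so shrinking the LV by exactly the extents that landed on /dev/sdb3 should leave the filesystem untouched. Roughly:

        # 1. see how many of the LV's extents were placed on /dev/sdb3
        pvdisplay -m /dev/sdb3      # note the extent count in the segment list

        # 2. shrink the LV by exactly that many extents (N stands for the
        #    count from step 1) - safe only because the fs was never grown
        lvreduce -l -N /dev/mapper/myvg-root

        # 3. the new PV is now unused and can be dropped or reused
        vgreduce myvg /dev/sdb3

    Double-check with pvdisplay that /dev/sdb3 shows no allocated extents before the vgreduce, and have a backup regardless.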

    Read the article

  • openvpn in a bridge?

    - by sebelk
    I have a somewhat tricky problem to solve. We have a wireless link between two buildings. One of them has a MikroTik, and below it there are some VLANs. Some machines on one VLAN need to use OpenVPN to connect to a remote private LAN. I put in a TP-Link WR1043ND (which those machines connect to) running OpenWrt, with ebtables just in case I need it. I've configured OpenWrt in such a way that all ports belong to the same VLAN. My idea was to make things as transparent as I can. It has a bridge as follows:

        /usr/sbin/brctl-full show br-lan
        bridge name    bridge id            STP enabled    interfaces
        br-lan         8000.f8d111565716    no             eth0.1
                                                           eth0.2

    Also I've added an ebtables rule:

        ebtables -t broute -A BROUTING -p ipv4 -j DROP

    So the "bridge" has only one IP address. I've installed OpenVPN and I'm trying to bring up the tunnel, but I still can't get it working. Sure, someone may say: why don't you run the VPN on the MikroTik? There are some reasons; the first one is that I have little experience with MikroTik and I'd want to have the VPN at hand. :) The problem is that OpenVPN is not working, because it is complaining that I have only one IP address on the server side. So I set up an alias interface with another IP address, but that is not working either:

        Rejected connection attempt from IP-Client-Side:37801 due to --remote setting

    Is there a way to make it work?

    Read the article

  • maintaining redirects in nginx from an external source

    - by Sascha
    I am in the situation of giving our marketing department the opportunity to maintain their redirects on their own. Until now, they passed the information to the IT department and we maintained it for them in nginx.conf. Some of these guys are quite familiar with redirections in IIS or even in Apache, but it is not an option to give them direct access to the nginx configuration. I see that there is no nginx support for .htaccess files, which I could otherwise have given them access to, and I would also prefer not to grant write access to a conf file that nginx includes. I expect that our marketing department would break our nginx setup within hours... Is there a secure possibility without giving them access to the heart of our load balancer?
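
    One low-risk pattern, sketched with illustrative paths: confine marketing to a single data file that nginx merely includes (for example inside a map block, as in the rewrite question above), and have an IT-owned deploy step validate before reloading, so a broken edit can never take the load balancer down:

        # /etc/nginx/marketing-redirects.map - the only file marketing edits,
        # one "old new;" pair per line, e.g.:
        #   /campaign2011   /newdir/landing.htm;

        # IT-owned deploy script:
        nginx -t && nginx -s reload    # reload only if the full config parses

    If nginx -t fails, the running configuration stays in effect and the bad edit is rejected rather than deployed.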

    Read the article

  • get a list of running ec2 instances programmatically

    - by user113981
    Hi, I have started with AWS and found out that we can get a list of running servers with the AWS PHP SDK. Is there any other way to get the list of all EC2 instances? After getting the list, I want to sync data from one main instance to all the other instances. Something like a button click should also be able to trigger the operation. Are rsync and incron the only options, or can it be done with the AWS PHP SDK as well? Please provide some tutorial links.
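
    A rough sketch of the SDK route, using the conventions of the AWS SDK for PHP 1.x that the question refers to (the filter and field names follow the EC2 API; double-check them against your installed SDK version):

        <?php
        require_once 'sdk.class.php';

        $ec2 = new AmazonEC2();

        // ask EC2 for instances in the 'running' state only
        $response = $ec2->describe_instances(array(
            'Filter' => array(
                array('Name' => 'instance-state-name', 'Value' => array('running')),
            ),
        ));

        // the response body is SimpleXML-like; walk reservations, then instances
        foreach ($response->body->reservationSet->item as $reservation) {
            foreach ($reservation->instancesSet->item as $instance) {
                echo $instance->instanceId . ' ' . $instance->ipAddress . "\n";
            }
        }
        ?>

    The actual file sync would still be a separate step (rsync over the addresses collected above, for instance); the SDK only gives you the inventory.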

    Read the article

  • Mongo Client RedHat EL5 UTF-8 Support

    - by Michael Irey
        # mongo
        MongoDB shell version: 1.6.4
        Fri Mar 16 11:55:46 *** warning: spider monkey build without utf8 support. consider rebuilding with utf8 support
        connecting to: test

    Mongo Server seems to handle the utf8 characters fine, as well as my php-mongo-client driver. But when I try to query a record that has a utf8 character from the mongo command line client I get:

        > db.Users.find({age:33});
        error:non ascii character detected
        Fri Mar 16 11:55:43 mongo got signal 11 (Segmentation fault), stack trace:
        Fri Mar 16 11:55:43 0x440b50 0x3664c302d0 0x3f47e7b6e0 0x3f47e83bbd 0x3f47e254f3 0x3f47e25660 0x3f47e256ee 0x3f47e25792 0x3f47e2876e 0x4b031d 0x443b72 0x445476 0x3664c1d994 0x43fd39
        mongo(_Z12quitAbruptlyi+0x3b0) [0x440b50]
        /lib64/libc.so.6 [0x3664c302d0]
        /usr/lib64/libjs.so.1 [0x3f47e7b6e0]
        /usr/lib64/libjs.so.1(js_CompileTokenStream+0x3d) [0x3f47e83bbd]
        /usr/lib64/libjs.so.1 [0x3f47e254f3]
        /usr/lib64/libjs.so.1(JS_CompileUCScriptForPrincipals+0x60) [0x3f47e25660]
        /usr/lib64/libjs.so.1(JS_EvaluateUCScriptForPrincipals+0x3e) [0x3f47e256ee]
        /usr/lib64/libjs.so.1(JS_EvaluateUCScript+0x22) [0x3f47e25792]
        /usr/lib64/libjs.so.1(JS_EvaluateScript+0x6e) [0x3f47e2876e]
        mongo(_ZN5mongo7SMScope4execERKSsS2_bbbi+0xed) [0x4b031d]
        mongo(_Z5_mainiPPc+0x14a2) [0x443b72]
        mongo(main+0x26) [0x445476]
        /lib64/libc.so.6(__libc_start_main+0xf4) [0x3664c1d994]
        mongo(__gxx_personality_v0+0x269) [0x43fd39]

    Any ideas or suggestions would be welcome.

    Read the article

  • Exchange 2010 Room Mailbox Calendar Permissions

    - by Brian Mitchell
    Exchange 2010 SP2, Outlook 2007/2010, Server 2008. I have managed to set up several room mailboxes in Exchange; people are able to book the rooms and they get a response from the Exchange server. This is brilliant. However, users are unable to view the calendar of the room mailbox to see what times are available. In an ideal world I would like users to only see whether the room is free or not; I don't want users to see the details of the meeting (title, description etc). I have been trying to do this using the following command:

        Add-MailboxFolderPermission -Identity meetingroom -User "Usergroup" -AccessRights AvailabilityOnly -DomainController AD-Server

    This throws the following error:

        Specified argument was out of the range of valid values.
        Parameter name: memberRights
            + CategoryInfo          : NotSpecified: (meetingroom:MailboxFolderIdParameter) [Add-MailboxFolderPermission], ArgumentOutOfRangeException
            + FullyQualifiedErrorId : CBC6516F,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission

    Any help on the situation would be brilliant; I have been trying to get this done for a couple of days and I'm going around in circles.
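
    One thing worth checking, offered as a sketch: AvailabilityOnly is a calendar-only role, and Add-MailboxFolderPermission expects the identity to name a folder, so targeting the room's Calendar folder explicitly may get past the memberRights error:

        # note the ":\Calendar" folder path appended to the mailbox identity
        Add-MailboxFolderPermission -Identity "meetingroom:\Calendar" `
            -User "Usergroup" -AccessRights AvailabilityOnly -DomainController AD-Server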

    Read the article

  • Hostname Problem On WHM / cPanel Installation

    - by Eray
    My CentOS 5.6 server's hostname was "centos". Then I changed it to my domain:

        hostname domain.com

    And I started installing WHM/cPanel as explained here: http://etwiki.cpanel.net/twiki/bin/view/AllDocumentation/InstallationGuide/InstallingCpanel It installed very well, and then I rebooted my server. After rebooting, I executed this command to open WHM's 2087 port:

        iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2087 -j ACCEPT

    Now when I try to browse domain.com:2087 I get "Server (centos) not found". I noticed it's forwarding to my old hostname (centos). I then executed this command to verify my hostname:

        hostname

    It returned "centos" again. I'm not sure why it returned to the old hostname. (I think it reverted after rebooting.) I changed it one more time:

        hostname domain.com

    Finally, now my hostname is domain.com. But I'm still getting the "Server (centos) not found" error. This is the result of the iptables -L command. P.S.: domain.com/cpanel is working.
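
    A hedged note on the likely cause: on CentOS 5 the hostname command only changes the running value, which reverts at boot unless it is also persisted. A sketch of the usual steps:

        # set the running hostname
        hostname domain.com

        # persist it across reboots
        sed -i 's/^HOSTNAME=.*/HOSTNAME=domain.com/' /etc/sysconfig/network

        # and make sure it resolves locally too (SERVER_IP is a placeholder)
        echo "SERVER_IP domain.com domain" >> /etc/hosts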

    Read the article

  • Ubuntu cannot resolve unmet dependency

    - by DisgruntledGoat
    I'm trying to install a package on my Ubuntu 8.10 server. However, I get this message:

        The following packages have unmet dependencies.
          webmin: Depends: apt-show-versions but it is not going to be installed
        E: Unmet dependencies. Try ‘apt-get -f install’ with no packages (or specify a solution).

    So I run apt-get -f install, which offers to install apt-show-versions and libapt-pkg-perl. After selecting to install without verification, I get these errors:

        Err http://gb.archive.ubuntu.com intrepid/universe libapt-pkg-perl 0.1.22build1
          404 Not Found
        Err http://gb.archive.ubuntu.com intrepid/universe apt-show-versions 0.13
          404 Not Found
        Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/universe/liba/libapt-pkg-perl/libapt-pkg-perl_0.1.22build1_i386.deb  404 Not Found
        Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/universe/a/apt-show-versions/apt-show-versions_0.13_all.deb  404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    I've tried running apt-get update and adding --fix-missing as suggested, but neither works. Where do I go from here?
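
    A likely cause, noted as a hedge: Ubuntu 8.10 (intrepid) is past end-of-life, and EOL releases are moved off the country mirrors to old-releases.ubuntu.com, which is exactly why those fetches 404. The usual repair, sketched:

        # point the package sources at the old-releases archive
        sudo sed -i -e 's/gb.archive.ubuntu.com/old-releases.ubuntu.com/g' \
                    -e 's/security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list

        sudo apt-get update
        sudo apt-get -f install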

    Read the article

  • SVN + Active Directory

    - by rudigrobler
    How do I set up SVN (on a Linux box - CentOS 5.2) to authenticate using Active Directory? Also: any tips or tricks? What should I watch out for? How fine-grained can I set the access - this group has access to these projects, etc.? And how does this work if I use something like TortoiseSVN to access my repository? What I have learned so far: you need the following modules installed for Apache:

        mod_ldap
        mod_authnz_ldap
        mod_dav
        mod_dav_svn
        mod_authz_svn?
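
    For orientation, a rough sketch of the Apache side (the DN values, host names and paths are placeholders to adapt; the per-path "this group, these projects" rules then live in the mod_authz_svn access file):

        <Location /svn>
            DAV svn
            SVNParentPath /var/svn

            # authenticate against Active Directory over LDAP
            AuthType Basic
            AuthName "Subversion repository"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.com:389/DC=example,DC=com?sAMAccountName?sub?(objectClass=user)"
            AuthLDAPBindDN "CN=svc-svn,CN=Users,DC=example,DC=com"
            AuthLDAPBindPassword "secret"
            Require valid-user

            # per-path authorization rules (mod_authz_svn)
            AuthzSVNAccessFile /etc/svn/svn-access.conf
        </Location>

    TortoiseSVN simply prompts for the AD username/password over HTTP(S), so nothing special is needed on the client side.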

    Read the article
