Search Results

Search found 19212 results on 769 pages for 'side projects'.

Page 604/769

  • Outlook 2007 + Exchange 2010 (Save All Attachments)

    - by RobertPitt
    About 3 weeks back our company upgraded our mail system to Exchange 2010. It all went smoothly, with a few issues but nothing major. A few days ago we had a call from a colleague who was unable to save all attachments via File > Save As > Save All Attachments. When the email has a single attachment it works perfectly, and depending on the file type it will let you save multiple attachments, but there are a lot of file types that will not work, such as zip, pdf, doc, etc. Usually a location box opens up asking where we would like to drop the attachments, but now nothing happens: you click Save All Attachments and nothing appears. After hours of research I have come across mixed results. A lot of people on forums explain that their issues started when they recently crossed over to Exchange 2010. On the other hand, Microsoft released a KB article (278188), which was depressing, but that article was published in 2007 according to its timestamp, and Exchange 2010 has only come out recently. I'm looking to see if you guys have any clues as to what could be causing this, and anything server side that I can take a look at (AD, Exchange, ...). Any help on this is greatly appreciated.

    Read the article

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu server). We have been facing a high recv-q problem for quite some time: the application hangs and does not read data from the socket every few hours. In the thread dump we found the stack trace below.

        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
           java.lang.Thread.State: RUNNABLE
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:150)
            at java.net.SocketInputStream.read(SocketInputStream.java:121)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
            - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
            at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
            at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
            at org.smpp.Receiver.receiveAsync(Receiver.java:351)
            at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
            at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
            at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently the recv-q size gets stuck: the receive buffer fills up at some point, the recv-q size does not decrease, and as a result we are losing a lot of hits from the other side because our application is not accepting any data.
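    A minimal diagnostic sketch for this kind of hang (the PID, port and thread id below are placeholders, not values from the dump): watch the socket's Recv-Q while mapping busy native threads back to the Java thread dump, since the nid field in the dump is the Linux thread id in hex.

        # Watch Recv-Q on the SMPP connection over time (port is a placeholder)
        ss -tn state established '( dport = :2775 )'

        # Take a fresh dump and see which native threads are actually busy
        jstack <java-pid> > dump.txt           # or: kill -3 <java-pid>
        top -H -b -n 1 -p <java-pid> | head -20
        printf 'nid=0x%x\n' <lwp-id-from-top>  # compare against the nid=... lines in dump.txt

    If the reader thread really is sitting inside BufferedInputStream while Recv-Q keeps growing, a read timeout on the socket (SO_TIMEOUT) at least turns a silent hang into an exception that can be logged and recovered from.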

    Read the article

  • Ping with explicit next-hop selection (aka Monitoring multiple default gateways)

    - by Michuelnik
    I have a linux (debian) router with two internet connections (A) and (B). (A) is preferred, (B) is fallback. I want to monitor the internet connection (and not only the availability of the gateways!) and change the default route appropriately:

    1. If (A) is not providing internet, switch to (B).
    2. If (A) is providing internet again, switch back to (A).

    The only problem I have is in case (2). My routing table points towards a working internet connection, so I cannot easily detect whether the internet is working over link (A) again. I am searching for a ping or traceroute (or other diagnosis tool) which can select the next hop explicitly. ping -r looks promising, but can only ping a host on the LAN. (It only has to write another destination address in the packet, damnit!) traceroute -g gateway looks even more promising and nearly does what I want, but it sets source routing options which my next hops deny. (Not within my administrative boundary...) I just want a ping that can:

    - select a source interface (and address)
    - select a next hop on that interface
    - ping any arbitrary IP address

    I could do evil trickery with policy-based routing, but that would have production impact for all users. I would like to see a side-effect-free solution...
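    One nearly side-effect-free sketch (the probe address, gateway and interface names are made-up placeholders): pin a /32 host route for the probe target onto link (A), ping it, and remove the route again. Only traffic to that single probe address is affected.

        # Send probes for 198.51.100.1 via link (A) regardless of the current default route
        ip route add 198.51.100.1/32 via 192.0.2.1 dev eth0
        ping -c 3 -I eth0 198.51.100.1
        ip route del 198.51.100.1/32 via 192.0.2.1 dev eth0

    A separate routing table selected by an ip rule that matches only the probe's source address would achieve the same thing without ever touching the main table.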

    Read the article

  • Plesk FTP not working but SFTP and Shell is working

    - by shamittomar
    I am facing a strange problem. The FTP on my Plesk VPS is not working. Whenever I try to connect, the FileZilla FTP client says:

        Status: Resolving address of xxxxxxxxxxxxx.com
        Status: Connecting to xxx.xxx.xxx.xxx:21...
        Status: Connection established, waiting for welcome message...
        Error:  Could not connect to server

    So it's not even getting to the step of asking for a username/password; it's something else. SFTP on port 22 is working fine, and I can successfully get shell access and run commands. But I NEED FTP access too, on port 21. I have searched everywhere but cannot find any setting to enable it. This is the Plesk version info:

        Parallels Plesk Panel version 9.5.2
        Operating system  Linux 2.6.26.8-57.fc8
        CPU               GenuineIntel, Intel(R) Pentium(R) 4 CPU 3.00GHz

    Any help is appreciated.

    [EDIT]: The firewall is not blocking it. I have checked on the server and there are absolutely no blocking rules; the firewall states "All incoming/outgoing connections are accepted on FTP". And on the client side (my PC), I can connect to other FTP servers, so this is not an issue with my PC's firewall. Moreover, I cannot even connect to the FTP from online FTP clients like net2ftp.
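    Since the connection dies before the welcome banner ever arrives, a quick server-side sketch of things to rule out (this assumes Plesk's usual xinetd-managed ProFTPD setup, which may differ on a given box):

        # Is anything listening on port 21 at all?
        netstat -tlnp | grep ':21 '

        # Plesk normally runs ProFTPD through xinetd
        cat /etc/xinetd.d/ftp_psa      # look for "disable = yes"
        /etc/init.d/xinetd restart

        # Watch the system log while a client tries to connect
        tail -f /var/log/messages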

    Read the article

  • vSphere Promiscuous mode only receiving packets one way from network switch

    - by steve.lippert
    We have two network switches: a PoE switch (SwitchA) to power our phones and users' computers, and a non-PoE switch (SwitchB) for the rest of the network. Each switch is set up to do port mirroring to support our VoIP recording system. SwitchA does port mirroring on specific ports if we need to record a user. SwitchB mirrors one port to monitor our work-at-home users (internet comes in from a managed router, to the switch, back out to our firewall). These two port-mirroring setups feed into one VMware vSphere 4.1 server, which has four physical NICs in total; the other two NICs feed into an unmanaged switch for connecting to the rest of the network. Once inside the vSphere server all network ports go into a vSwitch, and then one of the guests (Windows 2008 R2) sniffs them and does its thing. Everything is working fine and dandy from SwitchB. But from SwitchA we only receive one side of the VoIP packets (going out to the phone, nothing coming in from the phone). Troubleshooting steps I have taken so far: I hooked up my laptop to the monitor port on SwitchB and I see both sides of the packets, and I swapped which network interface is plugged into the monitor port on SwitchA. Because everything feeds into one vSwitch / vNetwork and both sides of the conversation arrive just fine from SwitchB, I believe everything is configured correctly on the vSphere server/guest. What could be causing one-way packets to arrive on my guest machine from only one interface, but not the other? Could a bad cable be causing the problems from SwitchB?

    Read the article

  • What Windows service binds a NIC to the network?

    - by Bigbio2002
    I have a server where the NIC takes several minutes to bind itself to the network on startup (it has a statically configured IP). This causes DNS/WINS/Intersite Messaging to fail to start, since they're dependent on a network connection. I'm still attempting to find a root cause (I've done firmware updates and checked for any odd drivers/services, no luck so far), but in the meantime I want to adjust the load order of services to ensure that the NIC binds first, before these services attempt to start. The only question is, which service is it? The server is running Server 2008 R2 and only has one NIC installed. (On a side note, there are two other small but odd problems occurring with the server. The server had the issue described in KB2298620, which I've fixed. The other problem occurs in Windows Server Backup: no events appear in the upper portion of the window, despite the fact that backups are running in the background, and whenever I attempt to modify the backup schedule it gives me the error "Not enough storage is available to process this command" and appears to fail, when in fact it actually succeeds. These may be separate issues, but something tells me that some of these might share a common root cause.)
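    As a stopgap rather than a root-cause fix, the dependent services can be kept from racing the NIC by switching them to delayed automatic start; a sketch, using the DNS service as the example name:

        rem Show the current start type and dependency list first
        sc qc DNS

        rem Start the service a couple of minutes after boot instead of immediately
        sc config DNS start= delayed-auto

    The same could be done for WINS and Intersite Messaging, or a dependency on a network-related service could be added instead, but note that "sc config ... depend=" replaces the whole existing dependency list, so that route needs more care.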

    Read the article

  • Recommended programming language for linux server management and web ui integration.

    - by Brendan Martens
    I am interested in making an in-house web UI to ease some of the management tasks I face administrating many servers; think Canonical's Landscape. This means doing things like applying package updates simultaneously across servers, perhaps installing a custom .deb (I use Ubuntu/Debian), reviewing server logs, executing custom scripts, and viewing status information for all my servers. I hope to be able to reuse existing command line tools instead of rewriting the exact same operations in a different language myself. I really want to develop something that allows me to continue managing at the SSH level but offers the power of a web interface for easily applying the same infrastructure-wide changes; they should not be mutually exclusive. What are some recommended programming languages to use for doing this kind of development and tying it into a web UI? Why do you recommend the language(s) you do? I am not an experienced programmer, but I view this as an opportunity to scratch some of my own itches as well as become a better programmer. I do not care specifically if one language is harder than another, but am more interested in picking the best tools for the job from the beginning. Feel free to recommend any existing projects except Landscape (not free), Ebox (not entirely free, and more than I am looking for) and Webmin (I don't like it, it feels clunky and does not integrate well with the "debian way" of maintaining a server, imo). Thanks for any ideas!
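    For what it's worth, the command-line reuse described above often starts as a loop like the sketch below (the host list file and the package step are purely illustrative); whatever language ends up driving the web UI then only has to wrap, schedule and report on commands of this shape.

        # Run the same maintenance step on every host listed in servers.txt
        while read host; do
            echo "== $host =="
            ssh -o BatchMode=yes "root@$host" 'apt-get update && apt-get -y upgrade'
        done < servers.txt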

    Read the article

  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site and am currently having issues with the h264 format. Looking at YouTube, I noticed they put their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500 MB+ at somewhat high resolution) take forever to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in slowness. So I tried putting the same exact h264 movie into an FLV container, and the playback buffering starts almost instantly - no slowness. So what am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why. And as a side question: I tried using HTML5's VIDEO tag to play the same h264 MP4 movie, and the scrubbing is LIGHTNING FAST! I looked into lighttpd's log file and noticed that the Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to ALL who can help. I very much appreciate this community.
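    A sketch of the usual relocation check, in case it helps (file names are placeholders): rewrite the file so the moov atom really is at the front, then verify it, since a later re-encode or re-mux can silently push it to the back again.

        # Rewrite with the moov atom first (either tool should do)
        qt-faststart input.mp4 output.mp4
        MP4Box -inter 500 input.mp4        # re-interleaves audio/video in 500 ms chunks

        # Quick check: "moov" should show up near the very start of the file
        head -c 1024 output.mp4 | strings | grep moov

    As far as I can tell, the video.mp4?start=234 requests are the Flash players asking mod_h264_streaming for a pseudo-streaming offset, while the HTML5 video tag just issues ordinary HTTP range requests, so the difference in the logs is about the player protocol rather than the container itself.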

    Read the article

  • Possible Solution for Setting up a Linux VPN Server to Encrypt WLAN Traffic of Macs and iPhones on

    - by GorillaPatch
    I would like to set up a VPN server on Debian Linux to encrypt wireless traffic coming from my Mac or iOS device. I would like to use a certificate-based solution; setting up a PKI infrastructure and managing certificates is OK for me.

    1. Which server to pick? Looking through the internet and here on Stack Overflow I found the following possible solutions: strongSwan, or IPSec with racoon. Which solution is feasible for a Linode running Debian Squeeze?

    2. How to configure the network? If I understood correctly, a VPN has a virtual network interface as its endpoint on the server side. Naively I would think that I need a DHCP server running on the server to assign a dynamic private IP (like the class C network 192.168.xxx.xxx) to the connecting clients. Next I think I would need to set up masquerading to NAT the incoming VPN traffic to the real interface directly connected to the internet. Is this the right way to go? Do you have any configuration examples?

    I often see VPN configurations used to connect to your home network, but that is not what I am looking for. I have a server up on the internet and want to use it as a proxy to encrypt traffic in insecure network environments like public WLANs.
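    For the NAT half of question 2, the usual sketch looks like this (the 10.8.0.0/24 client subnet and eth0 are assumptions to adapt; the VPN daemon itself hands out the client addresses from its own pool, so no separate DHCP server is needed):

        # Let the server forward packets between the VPN interface and the internet
        sysctl -w net.ipv4.ip_forward=1      # persist it in /etc/sysctl.conf

        # Masquerade VPN client traffic out of the public interface
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE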

    Read the article

  • Set primary group of file or directory on Samba share from Windows

    - by Hubert Kario
    Short version: I have this situation on a Samba share:

        $ ls -lha
        total 12K
        drwxr-xr-x  3 hka Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt 19 root root        4.0K Jan 11 17:06 ..
        drwxr-xr-x  2 hka Domain Users 4.0K Jan 11 17:07 dir A
        -rw-r--r--  1 hka Domain Users    0 Jan 11 17:07 file A

    How am I able to change this to the following using only a Windows SMB/CIFS client (using 3rd-party applications is OK)?

        $ ls -lha
        total 12K
        drwxr-xr-x  3 hka Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt 19 root root        4.0K Jan 11 17:06 ..
        drwxr-xr-x  2 hka ntpoweruser  4.0K Jan 11 17:07 dir A
        -rw-r--r--  1 hka ntpoweruser     0 Jan 11 17:07 file A

    Rationale and background info: I'm using POSIX ACLs on Samba shares. Together with acl group control for Samba, this allows me to delegate management of permissions to different users based on group membership. The thing is, when I create a new file on a Samba share, I'm unable to set its primary group (the one that grants permission to change its permissions). It gets set to my primary group (Domain Users) or to the group set with the force group option in the smb.conf share definition. Removing all groups in Windows except the one I want to become the new primary group doesn't work. I can change it using chgrp group folder/ as a regular user through a shell, but that's suboptimal (not all users are *nix users). Trying to set the new owner to a group from the Windows file permission window makes Samba return permission denied, with the following log entry:

        [2012/01/05 21:13:03.349734, 3] smbd/nttrans.c:1899(call_nt_transact_set_security_desc)
          call_nt_transact_set_security_desc: file = projects/project A/New folder, sent 0x1
        [2012/01/05 21:13:03.349774, 3] smbd/posix_acls.c:1208(unpack_nt_owners)
          unpack_nt_owners: unable to validate owner sid for S-1-5-21-4526631811-884521863-452487935-11025
        [2012/01/05 21:13:03.349804, 3] smbd/error.c:80(error_packet_set)
          error packet at smbd/nttrans.c(1909) cmd=160 (SMBnttrans) NT_STATUS_INVALID_OWNER

    The SID is correct and belongs to the group I specified in the GUI.
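    A server-side workaround sketch, using the names from the listing above (run once per delegated directory): with the setgid bit set on a directory, files and subdirectories created through the share inherit that directory's group instead of the creator's primary group.

        chgrp ntpoweruser "dir A"
        chmod g+s "dir A"     # setgid: new entries inside "dir A" get group ntpoweruser

    This doesn't make the owning group editable from the Windows security dialog, but it removes the need to chgrp every new file by hand.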

    Read the article

  • OARC's DNSSEC validating resolvers validate all my records but A records

    - by demize
    I have DNS set up with PowerDNS. It serves my DNS pretty well, and it AXFRs to other slaves. The slaves haven't yet updated to the most recent records, but that doesn't appear to affect the validation. Any record I can think of (AAAA, MX, TXT, even the CNAME for www) validates - except for A records:

        dig @149.20.64.20 +dnssec www.demize95.com CNAME

    returns

        ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 7

    while

        dig @149.20.64.20 +dnssec demize95.com A

    returns

        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 7

    (note the missing ad flag). The same happens with any other A record I have. I set up DNSSEC with pdnssec, and it does work for all the other records, but it has never validated my A records. What's the problem here? Also, a side note: I have to use ISC's DLV to create the chain of trust, since my domain registrar doesn't yet support sending the DS records to the com zone.
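    In case it's useful, the PowerDNS side can be sanity-checked with pdnssec (a sketch; forgetting to rectify the zone after editing records is a common source of odd, partially-validating behaviour, though it may not be the cause here):

        pdnssec check-zone demize95.com
        pdnssec rectify-zone demize95.com   # rebuilds the ordername/auth metadata DNSSEC answers rely on
        pdnssec show-zone demize95.com      # confirms the zone is treated as signed and lists its keys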

    Read the article

  • samba access from win98

    - by SimonSalman
    Hello, the admin installed a new file server in our institute: openSUSE 11.1 with Samba 3.2.7-11.3.2-2154-SUSE-CODE11. They copied the smb.conf from the old machine (hosting Samba 3.0.0) to the new one. Everything works as before, but one Windows 98 machine can see but not access the file server: it prompts for user authentication, but will not accept any user-password combination. There is a lot of discussion about the problem on the net, but none of it provides a clear answer.

    EDIT:
    1. I changed the Win98 registry to enable plain-text passwords, and alternatively changed the server's smb.conf and /etc/smbpasswd to accept encrypted passwords.
    2. Further, I provided a profile on the Win98 machine with a user-password combination matching one of the Samba users' combinations.
    3. I changed smb.conf so that the Samba server is the Local Master Browser.

    None of these changes are necessary when using the older Samba server, so I conclude that a configuration problem on the server side is likely. If you need any further information, I will post it here. Best regards, Simon
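    One thing that may be worth ruling out (a sketch, not a confirmed fix): Win9x clients authenticate with the old LanMan hash, and Samba releases newer than 3.0 are stricter about it, so explicitly allowing it and re-setting the password (so an LM hash actually gets stored) is a cheap test.

        # in the [global] section of smb.conf
        encrypt passwords = yes
        lanman auth = yes

        # then refresh the user's hashes and validate the config
        smbpasswd <username>
        testparm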

    Read the article

  • My home router randomly disconnects me and I'm unable to reconnect to it

    - by Roy Tang
    It's happened a few times, and I'm not sure how to diagnose/debug it, so any advice would be appreciated. Symptoms:

    - Sometimes the router will randomly disconnect; the connection icon on my desktop (wired to the router) gets that yellow "!" symbol that tells me my connection just went down. At this point I'm unable to ping the router.
    - Afterwards I try to reset the router by removing and then reconnecting the power jack on the router side (this is the fastest way, as I can't reset the power strip it's connected to without rebooting my desktop; the router has a reset thingy, but it's one of those where I have to find a pin to stick into the hole, and when I get disconnected I usually need to get reconnected immediately, so I just pull and put back the power jack). Even after that, the connection stays in the same state.
    - After the router reboots, if I try to connect to it using a wifi device like my iPad, the iPad prompts me for the wifi password even though it had already "remembered" all the settings for this router forever.
    - After I finally decide to reboot the power strip, and my desktop and the router boot up again, the connection more or less returns to its normal state and I'm able to connect as normal using the desktop and wifi devices.

    What do I need to check the next time this happens so I can figure out the problem? Is it possibly because we've been using the power jack on the router as an easier way to reboot it? Should I be shopping around for a new router? If it helps, the router is a D-Link DIR-300.

    Read the article

  • Windows based development environment: HyperV, VMWare, or VirtualBox on development machine?

    - by bleepzter
    I am a software engineer with a little bit of an informal "support" role. I am trying to figure out the best possible approach to employing virtualization technologies in our development process. Since the code we develop is server-centric, testing it often requires a VM with specific software requirements. I used to use VMware Player (the free version) to run my VMs until both of my laptops started exhibiting issues with corrupted Windows 7 services and dying hard drives. All leads pointed to VMware, which by the way seems to be a solid product if you pay for the Workstation edition ($300). On a side note, I have always been a fan of the Windows Server product line; I think it makes for one of the best development environments out there - it is highly scalable, highly reliable, and very efficient. So, to be fair, I replaced the drives of the laptops and installed Windows Server 2008 R2, VS2010 Ultimate SP1, SQL Server 2008 R2, TFS Server 2010 and all the other tools and APIs needed to do my work properly. So now I am stuck with a bunch of VMware VMs. I don't want a repeat of what happened before, and I certainly don't want to bog down my machine with an inefficient hypervisor or services that are not needed. Furthermore, the VMDK hard-disk format used by VMware is not compatible with the VHD format of Hyper-V. It is my understanding that converting from one format to the other can only be done by Microsoft System Center Virtual Machine Manager (SCVMM), which I have downloaded from MSDN and have ready to install. I guess the questions at this point are:

    - Does SCVMM run as another service in Windows? Is it a memory hog?
    - Which is the better virtualization technology, Hyper-V or VirtualBox, in terms of efficiency, ease of use and, most importantly, memory footprint? (Keep in mind the development environment already has a ton of services running, such as TFS Server, SQL Server, IIS, etc...)
    - How would you advise me to proceed at this point so that the VMs are still used in the test process?

    Thanks, Martin
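    On the disk-format point, SCVMM is not the only route; as a sketch (file names are placeholders), qemu-img can convert a VMware disk into the VHD ("vpc") format that both Hyper-V and VirtualBox can attach, and VirtualBox can also open the original VMDK directly, which is sometimes the quickest test.

        qemu-img convert -p -f vmdk devbox.vmdk -O vpc devbox.vhd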

    Read the article

  • 1080p monitor: connected through VGA - perfect, HDMI - awful

    - by develroot
    When I connect my 23" monitor (1920x1080) to my PC through HDMI, I encounter some problems.

    It's not full screen. There are ~1 cm black borders on the right and left sides and ~0.5 cm black borders at the top and bottom of the monitor, which is pretty frustrating. I tried adjusting overscan, but I can't manually type the % of overscan that I need; I can only select between, e.g., 8% and 10% in the AMD Vision Engine Control Center, while what I need is 9%. Worse, even if I select 10% and the image fits all the corners, all the settings are lost after logging off and I have to do it again and again.

    Text, images, everything looks blurry. That surprised me a lot - shouldn't HDMI quality be better than VGA's? When connected through HDMI, the text isn't readable. It's like a very low refresh rate, although I'm running at 60 Hz. The text also has something like little shadows, which is very annoying.

    Are there any tips to get the same quality with HDMI as with VGA? (I'm running on an integrated ATI Radeon HD4200 which, apparently, is the best card I have ever seen in terms of integrated ones.)

    Read the article

  • SSD for swap on Ubuntu server

    - by grs
    Currently I am reading SSD reviews and I wonder how much exactly I would benefit if I moved the 24 GB swap from a 7200 rpm HDD to an SSD. Has anyone implemented swap space on an SSD? Is this generally a good idea? On a side note: I read that ext4 has much better performance if the journal is on an SSD. Anyone with such a setup? Thanks!

    Edit: here I will answer the questions posted. Occasionally, relatively rarely, I am hitting the swap. I know what swap is for and that it is better to get more RAM. When the server begins to swap its performance degrades (not a surprise). The idea is that if I have a few memory-hungry processes running, using an SSD for swap instead of slower rotational media would improve the overall system performance at that time. In the end, I want to be able to log in faster and check the server state during swapping, instead of waiting at the login prompt. And from what I see, SSD is cheaper per GB than RAM. Would I have better server performance during swapping (as rare as it is) using an SSD compared to an HDD? Where would 10k or 15k rpm HDDs rate in this scenario? Thank you all for your quick and prompt answers!
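    Mechanically it's just a second, higher-priority swap area; a sketch, with the device name standing in for the SSD partition:

        mkswap /dev/sdb1
        swapon -p 10 /dev/sdb1          # higher priority than the existing HDD swap
        echo '/dev/sdb1 none swap sw,pri=10 0 0' >> /etc/fstab
        swapon -s                       # lists both areas and their priorities

    The kernel fills the highest-priority area first, so the HDD swap only comes into play once the SSD area is full.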

    Read the article

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, overall request rate, etc., but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time. I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it.

    The main things I'm looking for are:

    - All open source
    - Some way of collecting logs from apache machines that isn't too resource-intensive, and transports them relatively quickly over the network
    - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
    - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs

    Any suggestions/pointers/ideas, to either "products"/projects or descriptions of how other people do this would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop, lots of old stuff, homogeneous infrastructure, and strained boxes.
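    For the collection leg, one low-overhead sketch (the collector hostname and syslog facility are placeholders) is to hand the access log to syslog on each web server and let rsyslog ship it to the central box, so the storage/graphing side only ever sees plain syslog lines:

        # On each apache host (httpd.conf): pipe access logs to syslog alongside the normal file
        CustomLog "|/usr/bin/logger -t apache-access -p local6.info" combined

        # /etc/rsyslog.d/forward.conf on each host: relay that facility over TCP
        local6.*    @@logcollector.example.com:514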

    Read the article

  • GPO startup script not copying files

    - by marcwenger
    I created a GPO startup script to execute for computers in a specific AD container. The script takes a file from the AD netlogon share and places it in a directory on the computer. A user with the right permissions (i.e. myself) can execute the script just fine and the file copies, but it doesn't work on startup - the file does not copy over from the AD server. The startup script should run as localsystem (am I right?). So the question is, why do the files not copy on startup? Could it be because of:

    - permissions of the local system user?
    - reading the registry being problematic on startup?
    - obtaining files from the AD netlogon folder being problematic on startup?
    - or am I missing it completely?

    My test machine does have the registry key and local directories described in the script. I myself have standard user permissions on the test machine. The AD server is Windows 2008, the test client is Windows XP SP3 (and soon to be Windows 7, where I assume permissions issues will be inevitable).

        Dim wShell, fso, oraHome, tnsHome, key, srcDir
        Set wShell = WScript.CreateObject("WScript.Shell")
        Set fso = CreateObject("Scripting.FileSystemObject")

        key = "HKLM\Software\Oracle\Oracle_Home"

        On Error Resume Next
        orahome = wShell.RegRead(key)
        If err.Number = 0 Then
            tnsHome = oraHome + "\" + "network\admin\"
            srcDir = wShell.ExpandEnvironmentStrings("%logonserver%") + "\netlogon\UpdatedFiles\"
            fso.CopyFile srcDir + "file1.ext", tnsHome, true
        End If

    Side note: to ensure that the script is properly deployed, I purposely put some errors in the script, and on the next startup the error message appeared. So I know the GPO is deployed properly.

    Read the article

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do anyway). My current setup is this:

    - Local dev machine running NetBeans to make changes.
    - Remote server hosting the Git projects (the same server runs Apache) - two subsites exist, test.FQDN.com and live.FQDN.com.

    What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits done to the new feature branch would be pushed to test.FQDN.com. Once the features have been tested and merged into the master branch, it would push to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the git checkout -f command to deploy onto the test.FQDN.com site; however, that only checks out the master branch, not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
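    A sketch of a branch-aware post-receive hook for the bare repository on the server (the work-tree paths and the feature branch pattern are placeholders to adapt): each pushed ref is checked out into the site that matches its branch.

        #!/bin/sh
        # hooks/post-receive: deploy master to the live site, feature branches to the test site
        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            case "$branch" in
                master)    GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master ;;
                feature-*) GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f "$branch" ;;
            esac
        done

    Once a feature branch is merged into master and pushed, the same hook updates live.FQDN.com, so nothing beyond the hook and the two work trees is needed.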

    Read the article

  • Which project management software for technophobes who've never worked with something like that?

    - by Ernst
    Hi, our director has asked me to get something to manage projects. Note that so far we haven't used anything of the sort. I did not get very clear instructions yet, probably because she doesn't know exactly what we need either; my guess is we'll only find out while using something. I've looked at some options: OpenWorkbench, GanttProject, and Microsoft Project. The latter has the advantage of easy importing of users from Exchange; are there others that do that (even if not directly, as long as it's easy)? I don't think it's a critical requirement though. We're using some other custom software where I have to add users manually anyway, and we're small enough that it's maybe once a month that I have to add or remove a user. In any case, I'm not in favour of buying anything, since I'm skeptical about us actually succeeding in putting it to good use, and even if we do, we will only discover what we need during usage. We're also not a tech shop; most people vary from not very technically adept to technophobic, which means we need something very user-friendly. I prefer to stay away from online solutions, since we deal with sensitive information and I prefer to keep that in house, but I guess it would be acceptable for the trial period. An intranet site is an option though, preferably something that is easy to set up with IIS. XPlanner Plus and Redmine I found too hard to set up for this experiment. Some other options I haven't yet tried to install, but which look too complex for our technophobes: Endeavour Software Project Management, Project-Open, Trac. Any suggestions? Thanks, Ernst

    Read the article

  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been issued, I am looking for the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; what I need to know is whether there is a script that can be sent out, along with a URI or the package proper, to facilitate installation of the Windows Store app on the user's side. I am well aware that numerous guides exist from Microsoft on pushing out line-of-business (LOB, aka enterprise) applications - that is, apps that have been developed in-house and are not for consumption through the Windows Store. This is inappropriate for my requirement, however; the customer wants their clients to receive apps that currently appear on the Windows Store, and for them to be installed in a silent manner. I've seen this guide: http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines administered by a system running Windows Server 2012 R2 with the 'System Center 2012' bundle installed. The systems I target are considerably more decentralised than this, making the guide inappropriate. I have a hunch that Microsoft has deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged.

    Read the article

  • How to remove Media Center from Windows 8

    - by Arabella
    I bought 2 Windows 8 upgrades, 1 for my PC and 1 for my notebook. I added Windows Media Center to my notebook using the free offer in November (side note: the key was emailed to me within 5 minutes; I see many people have been complaining that it takes a few days). Today I decided to add WMC to my PC as well, so I went onto the Microsoft website, same as last time, and received the email within a few minutes. Once I added WMC, entered the key and the computer rebooted, my activation was broken: "This product key is already being used on another PC. Try a different key or buy a new one." After rereading the product key email, I realised that the WMC key was exactly the same as the one I had received in November for my notebook (I used the same email, i.e. my Microsoft account Outlook address, for both). I didn't think this would be a problem, as Microsoft's feature pack page states that the offer "...is limited to five licenses per customer per promotion." So I decided I'd just remove WMC from my PC and go back to Windows 8 Pro. I turned off the WMC feature and the PC restarted, but the activation was still broken because my key had been replaced. I then tried to activate with my original Pro key. The error it gave was that this key cannot be used with this version of Windows, as it is now Windows 8 Pro with Media Center and not Windows 8 Pro anymore. I've searched a bit and it seems the only way to remove it is to do a clean install. I tried the Windows 8 Downgrade Helper, which told me I was already running Win 8 Pro when I tried to downgrade, and that I was running Win 8 Pro with Media Center when I tried the other option. To sum up: how do I remove Windows Media Center from Windows 8 Pro without having to do a clean install?

    Read the article

  • Active Directory: how to be SURE users can change their own passwords?

    - by Latro
    I'm working on a project where a tool we have has to authenticate against AD, connecting via LDAPS, and perform password changes if required or requested. IN THEORY, the tool does that, and we have seen it work in other projects. IN PRACTICE, against this particular directory, it fails. It's been driving me crazy. The particulars of the situation:

    - Windows 2003 AD
    - We defined a "technical user" for the LDAP connection with rights to change users' passwords
    - When a password change is required - in this case, because pwdLastSet is 0 - the tool uses the technical account to bind to the controller and change the user's password.
    - If a password change is not required but the user requests it, then the bind is done with the user's own account.

    That last case is the one that doesn't work. With the technical user the password change is possible, but with the user itself it isn't. We get an error like this:

        LDAP access failed: javax.naming.directory.InvalidAttributeValueException:
        [LDAP: error code 19 - 0000052D: AtrErr: DSID-03190F00, #1:
            0: 0000052D: DSID-03190F00, problem 1005 (CONSTRAINT_ATT_TYPE), data 0, Att 9005a (unicodePwd)

    I have no idea what DSID-03190F00 means, because it doesn't seem to be anywhere on Google :-/ I've been looking at several MS documentation pages and, frankly, I'm not understanding one bit of it. There is some "control access right" called User-Change-Password that may, or may not, control which objects have the right to change their own password, which may, or may not, have to do with ACEs and ACLs... There is GPO. There is maybe the password policy, but it is only set to ask for passwords of 6 chars or more... Can anybody explain to me, in easy-to-check steps, what I can tell the AD admin guy (who is as lost as me) to do to ensure that users in the AD directory (objectClass top, person, organizationalPerson and user) are able to change their own passwords by themselves? Thanks in advance
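    For the "easy to check" part, one thing the AD admin can look at directly (a sketch; the DN is a placeholder): the self-service change shows up as the "Change Password" control access right granted to SELF and Everyone on the user object, and a Deny entry on it is exactly what the "User cannot change password" account option sets.

        dsacls "CN=Some User,OU=Staff,DC=example,DC=com" | findstr /i /c:"change password"

    That rules the ACL side in or out cheaply; if the right is present, the usual remaining suspects for a constraint violation on a self-service change are the password policy settings themselves, since minimum password age and password history apply to a user's own change but not to an administrative reset.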

    Read the article

  • Exceptional slowdown of robocopy copying from VM to DFS array

    - by user1588867
    I've got an old Win 2003 VM (VMware) on a blade cluster of VMs, and I'm moving a considerable number of files from it to our new DFS array. There are two main folders, with about 1.7 million and half a million smaller files (letters, memos, and other small files) respectively. Total size is ~420 GB and ~100 GB. We're using the GUI version of robocopy on the server to copy the files. We initiated a file copy about a month ago to test the process and found that it took around 4 hours for the larger folder. Now that I'm in the process of actually switching the files over, it has been taking 18-20 hours. Nothing has changed on the server side and nothing has changed in the settings of the copy (no logs, 1 retry with a wait of 1 second). Our intent is to shut off the share and force the copy over again to get all the files that were left out of the copy because they were locked by users, but I can't take a 20-hour outage to do that. Does anyone have any theories about what could be causing such a delay for robocopy compared to the previously shorter runs?
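    As a sketch of a way around the outage window (paths and flags below are illustrative, not taken from the existing job): pre-seed the target while the share is still live, repeat that as often as needed, and only run a short differential pass after closing the share, so the outage only covers changed and previously locked files.

        rem Pre-seed while users are still on the share; re-runs skip unchanged files
        robocopy \\oldserver\share \\dfs\target /E /R:1 /W:1 /NP /LOG:preseed.log

        rem Final pass after the share is closed: pick up stragglers and deletions
        robocopy \\oldserver\share \\dfs\target /MIR /R:1 /W:1 /NP /LOG:final.log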

    Read the article

  • Weird rendering artefact in vim (terminal, not MacVim)

    - by Tobi Lehman
    Running Mac OS X, using either Terminal.app or iTerm2, there is a strange artefact with the character rendering that I have a hard time explaining and an even harder time understanding. I'll start with a video of my screen so that you can see an example of it in action. From the video you can see a few ways it is weird; for example, sometimes when I hit a letter in insert mode, the character is printed twice. When I go into normal mode, the artefact remains. When I re-enter insert mode, hitting backspace copies the characters on the left to the position under the cursor. This has happened on OS X Lion and Mountain Lion, under both Terminal.app and iTerm2. It never happens in MacVim. Also, I use GNU/Linux on my other machine and have never seen it there; I am pretty sure it is strictly a Mac OS X issue, but I do not know how to fix it. For a while I've been working around it by using MacVim most of the time, but I prefer working in a terminal. Does anyone know what is happening here, and if so, how can I fix it?

    EDIT: I tried using the MacVim Vim executable in the terminal, and I still get strange artefacts, but they are localized to the left side of the screen.
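    A couple of cheap isolation steps, as a sketch (the file name is just a placeholder):

        # Rule out plugins and vimrc; if the artefacts disappear here, the config is the culprit
        vim -u NONE -N somefile

        # A terminal description that doesn't match the real terminal is a classic cause of redraw glitches
        echo $TERM        # compare inside and outside tmux/screen, if either is in use

    Since MacVim renders with its own GUI code, the fact that it looks fine there mostly rules out Vim itself and points at the terminal emulation / termcap layer.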

    Read the article
