Search Results

Search found 137626 results on 5506 pages for 'linked lists using c'.


  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge the shutdown request). Recovery is a case of power-cycling the server(s) from the Hyper-V console.

    These two servers don't serve a large number of users (one serves no more than 6 users, the other around 20). They are in the same domain, but on different physical hardware (and at different sites), and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections. This is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs, however).

    Today, when one of the servers crashed, I noticed an entry in the event log which looked similar to the following:

        Log Name:      Application
        Source:        ESENT
        Date:          27/11/2012 10:25:55
        Event ID:      533
        Task Category: General
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      HAL-FS-01.example.com
        Description:
        DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db:
        A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log"
        at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed
        for 36 second(s). This problem is likely due to faulty hardware. Please contact your
        hardware vendor for further assistance diagnosing the problem.

    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to be written to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching further back in the event log.

    Both of these virtual servers have their E: volumes fully allocated, as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises.

    I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve the problem?
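    Since the warning only surfaced after digging back through the log, it may help to scan for it programmatically from another machine before the next hang. A rough sketch in Python, assuming the pywin32 package and remote event-log access; the host name is the one from the event above:

        # Scan the Application log for the ESENT 533 warning that preceded the hang.
        import win32evtlog

        server = "HAL-FS-01"   # affected file server, from the event above
        log = win32evtlog.OpenEventLog(server, "Application")
        flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

        events = win32evtlog.ReadEventLog(log, flags, 0)
        while events:
            for ev in events:
                # EventID carries severity bits in the high word, so mask them off
                if ev.SourceName == "ESENT" and (ev.EventID & 0xFFFF) == 533:
                    print(ev.TimeGenerated, ev.StringInserts)
            events = win32evtlog.ReadEventLog(log, flags, 0)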


  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) to solve our persistent network issues, I've turned to serverfault for guidance. I've done some searching, read related documentation on cisco.com, and tried a bit of troubleshooting.

    Here is the config:

    - 100mb synchronous connection from a business internet provider (tested multiple times at 100meg at the source)
    - Cisco 871W wireless access point & router is where the WAN connection starts (this serves all our wireless). The only wired connection in the 871W is the Catalyst switch listed below.
    - Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate.
    - About 20 Windows workstations and servers (AD/webservers only).
    - Some services in EC2, including mail and other web servers/apps.
    - I've been TOLD the internal cabling should be gigabit-ready.

    Here are the problems:

    - Generally slow download rates from the internet to the desktop/laptop.
    - Frequent "page cannot be displayed" errors in browsers - sometimes 3 or 4 reloads are necessary. Often CSS won't load, or other content that requires the browser to connect to a different server.
    - Slow speed within the LAN when copying files from workstation to workstation. I would expect extremely fast data transfer workstation to workstation / server to workstation in this simple network (see the throughput sketch below).

    Several things I need to admit: I'm not primarily a network guy. Funding is relatively low, and I need to be the guy that finds the solution. I understand most of the terminology and most of the technology; implementation is where I fail due to lack of experience.

    Getting to the point: I'm wondering whether experienced network admins think that our small network should be sufficiently served with our current hardware if configured properly, or whether we should purchase new equipment and start fresh? If starting fresh is the plan, whatever that new equipment may be is likely a different question entirely.

    If I haven't provided enough information, I will happily do some troubleshooting and update with the results. I have experience using Wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance.

    EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP", then eventually crashes and closes. I've rebooted the hardware and this still happens.
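    Before replacing hardware, it is worth putting a number on the workstation-to-workstation complaint. A minimal one-way TCP throughput sketch in Python (standard library only; the port and payload size are arbitrary choices, not anything from the question) - run serve() on one machine and send() on another:

        import socket
        import time

        def serve(port=5001):
            # Receive one connection and report the effective throughput.
            srv = socket.socket()
            srv.bind(("", port))
            srv.listen(1)
            conn, addr = srv.accept()
            total = 0
            start = time.time()
            while True:
                chunk = conn.recv(65536)
                if not chunk:
                    break
                total += len(chunk)
            secs = time.time() - start
            print(f"{total * 8 / secs / 1e6:.1f} Mbit/s from {addr[0]}")

        def send(host, port=5001, megabytes=100):
            # Push a fixed amount of zeroes at the receiver, e.g. send("192.168.1.20").
            cli = socket.create_connection((host, port))
            cli.sendall(b"\0" * (megabytes * 1_000_000))
            cli.close()

    On nominally gigabit cabling through the Catalyst, numbers far below a few hundred Mbit/s would point at the switch ports, duplex negotiation, or cabling rather than the 871W.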


  • SMTP authentication error using PHPMailer

    - by Javier
    I am using PHPMailer to send a basic form to an email address, but I get the following error:

        SMTP Error: Could not authenticate. Message could not be sent.
        Mailer Error: SMTP Error: Could not authenticate. SMTP server error: VXNlcm5hbWU6

    The weird thing is that if I try to send it again, IT WORKS! Every time I submit the form after that first error, it works. But if I leave it for a few minutes and then try again, I get the same error. The username and password have to be right, as sometimes it works fine. I even created the following (very basic) script just to test it, and got the same result:

        <?php
        require("phpmailer/class.phpmailer.php");

        $mail = new PHPMailer();
        $mail->IsSMTP();
        $mail->Host     = "smtp.host.com";
        $mail->SMTPAuth = true;
        $mail->Username = "[email protected]";
        $mail->Password = "password";

        $mail->From     = "[email protected]";
        $mail->FromName = "From Name";
        $mail->AddAddress("[email protected]");
        $mail->AddReplyTo("[email protected]");

        $mail->IsHTML(true);
        $mail->Subject = "Here is the subject";
        $mail->Body    = "This is the HTML message body <b>in bold!</b>";
        $mail->AltBody = "This is the body in plain text for non-HTML mail clients";

        if (!$mail->Send()) {
            echo "Message could not be sent. <p>";
            echo "Mailer Error: " . $mail->ErrorInfo;
            exit;
        }
        echo "Message has been sent";
        ?>

    I don't think this is relevant, but I just changed my hosting to a Linux shared server. Any idea why this is happening? Thanks!

    UPDATED 02/06/2012: I've been doing some tests. The results: I tested the script on an IIS server and it worked fine; the error seems to happen only on the Linux server. Also, if I use the Gmail mail server it works fine on both IIS and Linux. Could it be a problem with the configuration of my Linux server?
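    One observation: "VXNlcm5hbWU6" is just the server's base64-encoded "Username:" AUTH prompt, so the session is dying partway through authentication rather than being rejected outright. The same credentials can be exercised outside PHPMailer with a short Python sketch that prints the whole SMTP conversation; host, port and credentials here are placeholders, not the real account:

        import base64
        import smtplib

        # The token from the error message is only the AUTH LOGIN prompt:
        print(base64.b64decode("VXNlcm5hbWU6"))   # b'Username:'

        server = smtplib.SMTP("smtp.host.com", 25, timeout=30)
        server.set_debuglevel(1)        # echo every SMTP command and reply
        server.ehlo()
        if server.has_extn("starttls"):
            server.starttls()
            server.ehlo()
        server.login("user@host.com", "password")
        server.quit()

    Running this from the Linux shared server a few minutes apart should show whether the first attempt stalls there too, independent of PHP.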


  • Using an SSD with no AHCI [ICH7 base] - Windows 7 hangs frequently

    - by h4xnoodle
    I have a Shuttle Intel G31 + ICH7 (base - not M/R etc.) system. I just bought an OCZ Vertex 3 120GB [VTX3-25SAT3-120G], which includes the Sandforce 2218 firmware. The ICH7 does not support AHCI. I understand that this can be a problem. What I don't understand is whether it's necessary for the proper performance of this drive. I know that without AHCI I may get limited read/write speed - this is fine. My concern is the constant freezing/hangs I'm getting in Windows 7 on any disk activity. The 'Highest Active Time' flip-flops from 0 to 100% every minute or so, regardless of large or small files.

    EDIT: The threads/processes with the highest response time are in the kernel.

    I've been reading about other people with Shuttle SG31G2s, and they seem to be using SSDs with no problem. Is this the controller's fault? The fact that I do not have AHCI enabled? It makes sense to me that if this SSD requires AHCI features it would cause Windows to hang, but I would like to fully determine my situation before returning things/reformatting.

    To get the system to recognise the SSD at all, I had to change the BIOS option for the SATA controller to Force Gen II instead of Auto. I then installed Windows with no problem. There were no errors in the event log related to disk usage, but watching perfmon I could see the highest active time and the processes (usually pagefile.sys being written to, or chrome/firefox caching) correlated with the hanging.

    So now what I need answered is: should I be returning this SSD and getting one with a different controller, or returning the SSD altogether, as it will never work out and I will continue to get these hangs?

    Posts I've read:

    - Windows 7 New SSD SATA AHCI? - suggests using AHCI
    - http://forums.anandtech.com/showthread.php?t=2189868 - Sandforce issues
    - Windows 7 freezes with SSD - and attached posts
    - Why does my Windows 7 PC / SSD drive keep freezing? - this is not the controller I have, but still a related issue
    - Windows 7 hangs after longer inactivity of user - also tried messing with power settings, with no luck; turning off HDDs was already set to 'Never'


  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24h, from a host running FreeBSD 9. What I came up with, after reading pf-faq, is this rule:

        pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24h in seconds.

    The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs, so my pfctl -sr output gives me this:

        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, one for each IP that www.google.com resolves to. However, I have the sense - without being 100% sure, as I haven't had the chance to test it - that the 17500/86400 limit applies to each IP separately. If that's the case - please confirm - then it's not what I want.

    In pf-faq there's another option called source-track global:

        source-track
            This option enables the tracking of number of states created per
            source IP address. This option has two formats:
            + source-track rule - The maximum number of states created by this
              rule is limited by the rule's max-src-nodes and max-src-states
              options. Only state entries created by this particular rule count
              toward the rule's limits.
            + source-track global - The number of states created by all rules
              that use this option is limited. Each rule can specify different
              max-src-nodes and max-src-states options, however state entries
              created by any participating rule count towards each individual
              rule's limits.
            The total number of source IP addresses tracked globally can be
            controlled via the src-nodes runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks
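    For what it's worth, the five-fold expansion can be previewed outside pf: the hostname is resolved once when the ruleset is loaded, producing one rule per A record. A quick standard-library Python check of what the name currently expands to:

        import socket

        # gethostbyname_ex returns (canonical name, aliases, list of A records)
        name, aliases, addresses = socket.gethostbyname_ex("www.google.com")
        print(name, addresses)   # e.g. ['173.194.44.80', '173.194.44.81', ...]

    This also shows why the rule set can drift: if Google's A records change after the ruleset is loaded, the expanded rules keep pointing at the old addresses.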


  • OpenVPN Clients using server's connection (with no default gateway)

    - by Branden Martin
    I wanted an OpenVPN server so that I could create a private VPN network for staff to connect to the server. However, not as planned, when clients connect to the VPN they use the VPN server's internet connection (e.g. when going to whatsmyip.com, the IP shown is that of the server, not the client's home connection).

    server.conf:

        local <serverip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert x.crt
        key x.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 9

    client.conf:

        client
        dev tun
        proto udp
        remote <server> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert x.crt
        key x.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server's routing table:

        Kernel IP routing table
        Destination     Gateway          Genmask          Flags Metric Ref  Use Iface
        10.8.0.2        *                255.255.255.255  UH    0      0      0 tun0
        10.8.0.0        10.8.0.2         255.255.255.0    UG    0      0      0 tun0
        69.64.48.0      *                255.255.252.0    U     0      0      0 eth0
        default         static-ip-69-64  0.0.0.0          UG    0      0      0 eth0
        default         static-ip-69-64  0.0.0.0          UG    0      0      0 eth0
        default         static-ip-69-64  0.0.0.0          UG    0      0      0 eth0

    Server's iptables:

        Chain INPUT (policy ACCEPT)
        target            prot opt source    destination
        fail2ban-proftpd  tcp  --  anywhere  anywhere     multiport dports ftp,ftp-data,ftps,ftps-data
        fail2ban-ssh      tcp  --  anywhere  anywhere     multiport dports ssh
        ACCEPT            udp  --  anywhere  anywhere     udp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:20000
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:webmin
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:https
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:www
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:imaps
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:imap2
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:pop3s
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:pop3
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ftp-data
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ftp
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:smtp
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ssh
        ACCEPT            all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target   prot opt source       destination
        ACCEPT   all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        ACCEPT   all  --  10.8.0.0/24  anywhere
        REJECT   all  --  anywhere     anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target   prot opt source       destination

        Chain fail2ban-proftpd (1 references)
        target   prot opt source       destination
        RETURN   all  --  anywhere     anywhere

        Chain fail2ban-ssh (1 references)
        target   prot opt source       destination
        RETURN   all  --  anywhere     anywhere

    My goal is that clients can only talk to the server and to the other clients that are connected. Hope I made sense. Thanks for the help!


  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, and therefore generic, with standard tools and nothing bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, as we don't know what might happen to them when we are not looking. Our current solution is a bit naive: it simply restricts inbound rsync-over-ssh to known MAC address directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files:

    - Would I need to manage hundreds of access keys and have each server customised to include its own (doable, but key management becomes nightmarish)?
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key: if one machine were breached, a malicious hacker could potentially get access to the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection - unless my Google terminology is wrong...

    I've written more than I should here; perhaps it can be summarised thus: in a perfect world, I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible?

    TIA, Aitch
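    For the upload side, here is a sketch of what each box could run, under the assumption that every box is provisioned at build time with its own IAM credentials whose policy allows s3:PutObject only under that box's key prefix - so a stolen key can add data but not read or delete anyone else's. The bucket name, prefix, file path and the boto3 package are assumptions for illustration, not part of the original question:

        import boto3

        # Credentials baked into this box's image at build time (hypothetical).
        s3 = boto3.client(
            "s3",
            aws_access_key_id="THIS-BOXES-KEY-ID",
            aws_secret_access_key="THIS-BOXES-SECRET",
        )

        # Upload one harvested file under this box's own prefix.
        s3.upload_file(
            "/var/spool/sensors/2012-06-01-0500.csv",   # local file (illustrative)
            "sensor-archive",                           # hypothetical shared bucket
            "site-0042/2012-06-01-0500.csv",            # per-box key prefix
        )

    Per-prefix write-only policies sidestep both the shared-key risk and the bucket-per-machine limit: one bucket, hundreds of prefixes, one low-privilege key per box.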


  • Stack-based keyboard delay using Logitech MX3100 keyboard

    - by Mark S. Rasmussen
    I've been using a Logitech Cordless Desktop MX3100 keyboard for quite a while. I've never really had any problems, except for the occasional typo. I noticed, however, that I tended to make the typo "Laod" instead of "Load" quite a bit more often than any other typo. As it started to get on my nerves, I decided to do some testing.

    What I found out was that when I type lowercase "load", I never make the typo. All uppercase, or just an uppercase L, and I make the typo quite often. My actual (very scientific) testing is probably best described by showing the output:

        moatmoatmoat MoatMoatMoat
        loatloatloat LaotLaotLaot
        loafloafloaf LaofLaofLaof
        hoathoathoat HoatHoatHoat
        hoadhoadhoad HoadHoadHoad
        lortlortlort LrotLrotLrot

    What I found was that whenever Shift was depressed, typing an uppercase "L" would induce a significant lag if the next character was an "o", compared to the lag of any other key:

        High "o" lag:                   LoLoLoLoLoLo
        No "a" lag:                     LaLaLaLaLaLa
        No lag for either "o" or "a":   lolololololo lalalalalala

    By realising this I regained a slight bit of sanity, as I knew I wasn't coming down with a case of Parkinson's. I was actually typing correctly; the lag just caused it to be interpreted wrongly.

    Now, what really bugs me is that I can't fathom how this is occurring. What I'm actually typing, in physical order, is this: L - o - a - d. And yet the "a" is output before the "o", even though "o" was pressed before "a". So while the keyboard is processing the "Lo" combo, the "a" gets prioritised and inserted before the "o" is done processing, resulting in "Laod" instead of "Load". And this only happens when typing "Lo", not when typing lowercase "lo".

    This problem could stem from the keyboard hardware, the receiver hardware or the keyboard software driver. No matter where the fault lies, though, I can't imagine how this could be implemented as anything but a FIFO queue. A general delay, sure - I could live with that, albeit irritated. But a lag that affects different keys differently, and even results in reordered output - that just doesn't make any sense.

    I've solved the problem by switching to a wired keyboard. I just can't shake it off me, though: what kind of bug/error/scenario would result in a case like this?

    Edit: It's been suggested that I stop drinking Red Bull and stick to water instead. While that may actually help solve the issue, I'm really not looking for a solution as such. I'm more interested in an explanation of how this could happen, as I can't imagine any viable technical solution that could result in this behavior.
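    One way to narrow down where the reordering happens is to log key events with timestamps as the operating system delivers them and compare that order with what was physically typed: if the OS already receives "a" before "o", the reordering is upstream in the keyboard/receiver/driver. A sketch assuming the pynput package; any timestamping key logger would do:

        import time
        from pynput import keyboard

        last = None

        def on_press(key):
            # Print each key with the gap (ms) since the previous key event.
            global last
            now = time.perf_counter()
            gap = 0.0 if last is None else (now - last) * 1000
            print(f"{key!r:12} +{gap:6.1f} ms")
            last = now

        with keyboard.Listener(on_press=on_press) as listener:
            listener.join()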


  • Block IP Address including ICMP using UFW

    - by dr jimbob
    I prefer ufw to iptables for configuring my software firewall. After reading about this vulnerability, also on askubuntu, I decided to block the fixed IP of the control server: 212.7.208.65. I don't think I'm vulnerable to this particular worm (and I understand the IP could easily change), but I wanted to answer this particular comment about how you would configure a firewall to block it. I planned on using:

        # sudo ufw deny to 212.7.208.65
        # sudo ufw deny from 212.7.208.65

    However, as a test that the rules were working, I tried pinging after I set up the rules and saw that my default ufw settings let ICMP through, even from an IP address set to REJECT or DENY:

        # ping 212.7.208.65
        PING 212.7.208.65 (212.7.208.65) 56(84) bytes of data.
        64 bytes from 212.7.208.65: icmp_seq=1 ttl=52 time=79.6 ms
        ^C
        --- 212.7.208.65 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 79.630/79.630/79.630/0.000 ms

    Now I'm worried that my ICMP settings are too generous (conceivably this or a future worm could set up an ICMP tunnel to bypass my firewall rules). I believe this is the relevant part of my iptables rules (and even though grep doesn't show it, the rules are associated with the chains shown):

        # sudo iptables -L -n | grep -E '(INPUT|user-input|before-input|icmp |212.7.208.65)'
        Chain INPUT (policy DROP)
        ufw-before-input  all   --  0.0.0.0/0     0.0.0.0/0
        Chain ufw-before-input (1 references)
        ACCEPT            icmp  --  0.0.0.0/0     0.0.0.0/0     icmp type 3
        ACCEPT            icmp  --  0.0.0.0/0     0.0.0.0/0     icmp type 4
        ACCEPT            icmp  --  0.0.0.0/0     0.0.0.0/0     icmp type 11
        ACCEPT            icmp  --  0.0.0.0/0     0.0.0.0/0     icmp type 12
        ACCEPT            icmp  --  0.0.0.0/0     0.0.0.0/0     icmp type 8
        ufw-user-input    all   --  0.0.0.0/0     0.0.0.0/0
        Chain ufw-user-input (1 references)
        DROP              all   --  0.0.0.0/0     212.7.208.65
        DROP              all   --  212.7.208.65  0.0.0.0/0

    How should I go about making ufw block ICMP when I specifically attempt to block an IP address? My /etc/ufw/before.rules has, in part:

        # ok icmp codes
        -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
        -A ufw-before-input -p icmp --icmp-type source-quench -j ACCEPT
        -A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT
        -A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT
        -A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT

    I tried changing ACCEPT above to ufw-user-input:

        # ok icmp codes
        -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ufw-user-input
        -A ufw-before-input -p icmp --icmp-type source-quench -j ufw-user-input
        -A ufw-before-input -p icmp --icmp-type time-exceeded -j ufw-user-input
        -A ufw-before-input -p icmp --icmp-type parameter-problem -j ufw-user-input
        -A ufw-before-input -p icmp --icmp-type echo-request -j ufw-user-input

    But ufw wouldn't restart after that. I'm not sure why (still troubleshooting), and I'm also not sure if this is sensible. Will there be any negative effects (besides forcing the software firewall to push ICMP through a few more rules)?
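    After each rules change, the block can be verified from a script rather than by eyeballing ping output - a small Python sketch around the system ping (the target is the IP from the question; -c/-W are the Linux ping flags):

        import subprocess

        target = "212.7.208.65"
        result = subprocess.run(
            ["ping", "-c", "3", "-W", "2", target],
            stdout=subprocess.DEVNULL,
        )
        # ping exits 0 only if at least one reply came back
        print("ICMP still gets through" if result.returncode == 0 else "ICMP blocked")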


  • Apache + Tomcat error 120006 Using mod_proxy_ajp for Load Balance

    - by Wakaru44
    I have an Apache 2 frontend with two nodes, and a backend with two instances of Tomcat 6 balanced with mod_proxy_ajp. The database is on a separate machine. All machines run RHEL: 6.2 on the frontend, 5.5 on the backend. The infrastructure is virtualised using VMware.

    This is the Apache config in one of the VirtualHosts:

        ProxyPreserveHost On
        ProxyPass / balancer://liferay/
        <Proxy balancer://liferay>
            BalancerMember ajp://lrab:8009 route=liferaya
            BalancerMember ajp://lrbb:8009 route=liferayb status=+H
            ProxySet lbmethod=byrequests nofailover=on
        </Proxy>

    The connector in Tomcat is now configured like this:

        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   URIEncoding="UTF-8" enableLookups="false" allowTrace="true" />

    Do you think it could be useful to set a maxThreads parameter, like in this post? In that case, how can I determine a proper number of threads?

    From time to time we get errors like this:

        [Tue Sep 18 17:57:02 2012] [error] ajp_read_header: ajp_ilink_receive failed
        [Tue Sep 18 17:57:02 2012] [error] (120006)APR does not understand this error code: proxy: read response failed from 192.168.1.104:8009 (lrab)

    And Apache switches to the passive node (if it's active) or fails with 503.

    Some things I have tried so far:

    - I think I have some performance issues with one of the applications; here you can see a thread dump. But I'm not quite sure about it.
    - I also started to monitor the network connection. I noticed that some pings are lost during a "ping -f", so maybe it could be a network issue - but the success rate is 100%, so the lost packets are only a few among the flood (maybe, I don't know, still enough to break the link between Apache and Tomcat).
    - I wrote a Python script to check connectivity, with timestamps on the pings, so I can know when the network fails (see the sketch below).
    - After sniffing the network, I can also see some RST packets, but I don't know if that is normal behaviour (some applications do that to end a network communication).
    - I have also noticed that the applications have problems communicating with the database, but I'm not even sure if this is related. If you think so, I can post more info about it.
    - I changed the connector on the Tomcats to use the native one, but still the same.

    I don't necessarily need a solution to this, but maybe some guidance on how I can troubleshoot it better: analyse threads, monitor MySQL performance, sniff the traffic between the Apaches and the Tomcats? Ultimately, all I need is to balance the Tomcat instances in active/passive mode, so if there is another way to do it, I could give it a try.
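    Along the lines of the connectivity script mentioned above, a minimal sketch: probe the backend once a second and log a timestamp whenever a ping fails, so outages can be lined up against the Apache error log. The host is the backend node from the error message; this is a reconstruction, not the author's original script:

        import datetime
        import subprocess
        import time

        HOST = "192.168.1.104"   # backend Tomcat node from the 120006 error

        while True:
            ok = subprocess.run(
                ["ping", "-c", "1", "-W", "1", HOST],
                stdout=subprocess.DEVNULL,
            ).returncode == 0
            if not ok:
                print(f"{datetime.datetime.now().isoformat()} lost contact with {HOST}")
            time.sleep(1)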


  • Remotely installing an MSI on Citrix servers using WMI

    - by capn
    OK, I'm a C# programmer trying to streamline the deployment of a custom Windows Forms app I inherited and built an installer for with WiX (this app will need to be reinstalled regularly as I'm making changes to it). I'm not really used to admin-type things (or VBS, or WMI, or terminal servers, or Citrix - and even WiX and MSI are not things I usually deal with), but so far I've put together some VBS and have an end goal in mind. The MSI does work, and I've installed it from the mapped O: drive on my dev machine and while RDP'd to a Citrix machine.

    End goal: deploy code written on my dev machine and compiled into an MSI (that I can improve upon within the confines of WiX and whatever the Windows Installer engine allows) to the cluster of Citrix machines my users have access to.

    What am I missing in my script to get the MSI to execute on the remote machines?

    Layout:

    - Machine A is my dev machine, and has the VBS script and the MSI file (XP SP3)
    - Machines C1 - C6 are the Citrix servers that need the application installed via the MSI (Server 2003 R2 SP2)
    - There is also, optionally, a shared network resource that all the machines can access.

    Script:

        'Set WMI Constants
        Const wbemImpersonationLevelImpersonate = 3
        Const wbemAuthenticationLevelPktPrivacy = 6

        'Set whether this is installing to the debug Citrix Servers
        Const isDebug = true

        'Set MSI location
        'Network location yields error 1619 (This installation package could not be opened.)
        msiLocation = "\\255.255.255.255\odrive\Citrix Deployment\Setup.msi"
        'Directory on machine A yields error 3 (file not found)
        'msiLocation = "C:\Temp\Deploy\Setup.msi"
        'Mapped network drive (on both machines) yields error 3 (file not found)
        'msiLocation = "O:\Citrix Deployment\Setup.msi"

        'Set login information
        strDomain = "MyDomain"
        Wscript.StdOut.Write "user name:"
        strUser = Wscript.StdIn.ReadLine
        Set objPassword = CreateObject("ScriptPW.Password")
        Wscript.StdOut.Write "password:"
        strPassword = objPassword.GetPassword()

        'Names of Citrix Servers
        Dim citrixServerArray
        If isDebug Then
            citrixServerArray = array("C4")
        Else
            'citrixServerArray = array("C1","C2","C3","C5","C6")
        End If

        'Loop through each Citrix Server
        For Each citrixServer in citrixServerArray
            'Login to remote computer
            Set objLocator = CreateObject("WbemScripting.SWbemLocator")
            Set objWMIService = objLocator.ConnectServer(citrixServer, _
                "root\cimv2", _
                strUser, _
                strPassword, _
                "MS_409", _
                "ntlmdomain:" + strDomain)

            'Set remote impersonation level
            objWMIService.Security_.ImpersonationLevel = wbemImpersonationLevelImpersonate
            objWMIService.Security_.AuthenticationLevel = wbemAuthenticationLevelPktPrivacy

            'Reference to a process on the machine
            Dim objProcess : Set objProcess = objWMIService.Get("Win32_Process")

            'Change user to install for terminal services
            errReturn = objProcess.Create _
                ("cmd.exe /c change user /install", Null, Null, intProcessID)
            WScript.Echo errReturn

            'Install MSI here
            'Reference to a product on the machine
            Set objSoftware = objWMIService.Get("Win32_Product")

            'All users set in option parameter; I'm led to believe the third parameter is actually ignored
            'http://www.webmasterkb.com/Uwe/Forum.aspx/vbscript/2433/Installing-programs-with-VbScript
            errReturn = objSoftware.Install(msiLocation,"ALLUSERS=2 REBOOT=ReallySuppress",True)
            Wscript.Echo errReturn

            'Change user back to execute
            errReturn = objProcess.Create _
                ("cmd.exe /c change user /execute", Null, Null, intProcessID)
            WScript.Echo errReturn
        Next

    I also tried using this to install. It doesn't return an error code, but it doesn't install the MSI either, and it makes me wonder if the change user /install command is even really working:

        errReturn = objProcess.Create _
            ("cmd.exe /c msiexec /i ""O:\Citrix Deployment\Setup.msi"" /quiet")
        Wscript.Echo errReturn
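    For comparison, here is the same remote-execution idea in Python, as a sketch using the third-party wmi package (built on pywin32); the server name, credentials and msiexec arguments are illustrative. One caveat worth noting: per-user drive mappings such as O: generally do not exist in the session WMI creates processes in, which may be why the mapped-drive path yields "file not found" while a UNC path behaves differently:

        import wmi

        # Connect to one Citrix server with explicit credentials (hypothetical).
        conn = wmi.WMI("C4", user=r"MyDomain\admin", password="secret")

        # Launch msiexec remotely; use a UNC path, not a mapped drive, and
        # write a verbose log so failures can be diagnosed afterwards.
        pid, result = conn.Win32_Process.Create(
            CommandLine=r'msiexec /i "\\fileserver\odrive\Citrix Deployment\Setup.msi"'
                        r' /quiet /l*v C:\Temp\install.log'
        )
        print("process id:", pid, "return code:", result)   # 0 = process started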


  • Creating a USB stick for installing centos 6.x using DVD1 and DVD2 iso files

    - by user250563
    First, we create 2 partitions on the USB stick, which is, let's say, 16GB. The first partition is, let's say, only 1GB, and the second partition is the rest of what is available. After we "w" (write) the changes, the USB now has 2 partitions: one is 1GB, one is more than 14GB. So we now have sdb1 and sdb2.

    Now we need to turn these partitions into filesystems. Some say I should run these commands after those procedures:

        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2

    but some web pages recommend using:

        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

    Which is it?

    So let's say the DVDs are called:

        CentOS-6.4-x86_64-bin-DVD1.iso
        CentOS-6.4-x86_64-bin-DVD2.iso

    We make a directory:

        mkdir -p /mnt/dvd1

    and then mount it:

        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

    And I suppose we don't make a directory for DVD2, and we don't have to mount it?

    At this point I do not know what should be done, but I think this step might be next: we make the USB bootable by finding the file named mbr.bin and writing it there via these commands:

        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

    In other words, we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for sdb. So far everything looks good?

    Here is the confusing part: how exactly do I move these ISO files to the USB drive? EVERYTHING BELOW IS A GUESS.

    So at this point I should copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2? Rename it to syslinux? And then inside this syslinux folder there will be a file called isolinux.cfg, which should be renamed to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto the USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing? Or do they go into particular folders?

    CentOS's own website has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey

    I once got this working, but things got ruined. I have to do it again, and this time take notes.


  • List repositories from multiple projects in Trac using mod_python

    - by Steffen Eriksen
    I'm currently working on a customised webpage that shows the available projects I have in Trac (1.0.1). I am using mod_python to connect to the Trac interface. I found a standard page for this, but it didn't show a listing of repositories. The page exposes some variables to link to the different projects, but I can't find variables for the different repositories inside the projects.

    I have set up the webpage from reading this (under Site Appearance): http://trac.edgewall.org/wiki/TracInterfaceCustomization

    Short summary - editing ../conf.d/trac.conf:

        PythonOption TracEnvParentDir /parent/dir/of/projects
        PythonOption TracEnvIndexTemplate /path/to/template

    And making a template file I can edit at /path/to/template:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:py="http://genshi.edgewall.org/"
              xmlns:xi="http://www.w3.org/2001/XInclude">
          <head>
            <title>Available Projects</title>
          </head>
          <body>
            <h1>Available Projects</h1>
            <ul>
              <dl>
                <li py:for="project in projects" py:choose="">
                  <a py:when="project.href" href="$project.href"
                     title="$project.description">$project.name</a>
                  ## <dd> WANT TO ADD CODE HERE! </dd>
                  <py:otherwise>
                    <small>$project.name: <em>Error</em> <br />
                    ($project.description)</small>
                  </py:otherwise>
                </li>
              </dl>
            </ul>
          </body>
        </html>

    So... the code I want to add is something like:

        <dd py:for="repos in project.repository" py:choose="">
          <a py:when="repos.href" href="$repos.href">$repos.name</a>
        </dd>

    I can't figure out where to add the variables, or whether there already exist some variables I can use. After searching through the files, it seemed like main.py had something to do with the variables (/usr/local/Trac-1.0.1/trac/web/main.py), but at first look it didn't seem easy to just add more variables.

    Is there a simple way to find the rest of the variables? And how hard is it to add more variables? Will it perhaps be easier to do this an alternative way? All I need is to link to the repositories dynamically.
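    On the Python side, Trac itself can list repository metadata per environment, which hints at the data such a template variable would need to carry. A sketch - not the stock index-template machinery - using Trac's repository API; the project path is the hypothetical one from trac.conf above:

        from trac.env import open_environment
        from trac.versioncontrol.api import RepositoryManager

        # Open one project environment under the TracEnvParentDir
        env = open_environment("/parent/dir/of/projects/some-project")

        # get_all_repositories() maps repository name -> attribute dict
        for reponame, info in RepositoryManager(env).get_all_repositories().items():
            print(reponame or "(default)", info.get("dir"))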


  • Windows cannot find the host name "download.microsoft.com" using DNS

    - by joedotnot
    When trying to download a file found on the Microsoft downloads center that starts with, for example, http://download.microsoft.com/download/6/8/7/(some_GUID)/(some_file_name.ext), I get a timeout with "Internet Explorer cannot display the webpage". More information says:

        Internet connectivity has been lost.
        The website is temporarily unavailable.
        The Domain Name Server (DNS) is not reachable.
        The Domain Name Server (DNS) does not have a listing for the website's domain.
        If this is an HTTPS (secure) address, click Tools, click Internet Options, click
        Advanced, and check to be sure the SSL and TLS protocols are enabled under the
        security section.

    "Diagnose Connection problems" says:

        Windows cannot find the host name "download.microsoft.com" using DNS

    Bear with me while I expand on the problem. It all started when I tried to download Windows XP Mode for my Windows 7 machine. I went to the Virtual PC site, then through the motions of Windows Genuine Advantage, which validated OK, but when it redirects to grab the file, it just times out with the above error. (NB: I also tried the latest Chrome and Firefox, but to no avail due to the Genuine Advantage stuff, so I decided to stick with IE.)

    I am behind an ADSL2+ modem router, connecting via wireless (Win 7 Pro laptop). So I hopped over to the desktop connected via Ethernet (Vista Business) - same result. I began to think the download.microsoft.com site was down, so I gave it a break and read up on EDNS, flushing the cache, hosts files, etc.

    I tried again an hour later on the Win 7 machine - still no go. So I turned off the Win 7 (software) firewall, and lo and behold, I could connect and grab any files from download.microsoft.com. (Nice - so we have a Micro$0ft firewall preventing access to a Micro$0ft website. No wonder my auto-updates kept failing, but that's another story.)

    But I am still not happy that the desktop connected via Ethernet cannot get to download.microsoft.com, even though I turned off all firewalls, defenders, anti-virus, etc. What is so special / specific about the URL download.microsoft.com? Any other site is OK, including www.microsoft.com. Any networking guru know what's REALLY going on, and how I can get the desktop to connect?

        Ping download.microsoft.com - Ping request could not find host download.microsoft.com.
        Please check the name and try again.

    Pinging google.com or even www.microsoft.com works and gives me an IP address. NB: On the wireless laptop, ping download.microsoft.com works - I get xxxx.ms.akamai.net [202.7.177.33].
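    Since download.microsoft.com resolves into Akamai's CDN (the laptop's ping shows xxxx.ms.akamai.net) while www.microsoft.com does not, comparing how each name resolves from the affected desktop can isolate whether the failure is in following that CNAME chain. A standard-library Python sketch to run on both machines:

        import socket

        for host in ("download.microsoft.com", "www.microsoft.com", "google.com"):
            try:
                canonical, aliases, addresses = socket.gethostbyname_ex(host)
                print(host, "->", canonical, addresses)
            except socket.gaierror as err:
                print(host, "-> lookup failed:", err)

    If the desktop fails only on the Akamai-backed name, the next suspects are its configured DNS servers or security software filtering the longer CNAME responses.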


  • PHP at the root directory using Nginx on Linode and Ubuntu 12.04

    - by Steve Kinney
    I originally set up my Linode to use with the Sinatra applications I was developing, using Phusion Passenger, and I have it working great for that. However, as time goes on, I find myself needing just a wee bit of PHP to do a server-side thing here or there. My basic setup was based on this Linode recipe (I copied and pasted the parts that I needed - I did not install Redis and Node).

    If I go to http://scholarsnyc.com/index.php everything works great. If I just go to the base URL, however, I get a 403 Forbidden error (I have a vanilla HTML page there for now). I've played with file permissions, and the same file will work if I call it directly. I've done my homework and nothing I try seems to work. I'm sure there is an obvious error. I'm also sure that there are some rookie mistakes in my Nginx configuration (some of those mistakes are artifacts of trying different fixes from my research):

        user www-data www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        upstream php {
            server 127.0.0.1:9001;
        }

        http {
            passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12;
            passenger_ruby /usr/local/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            index index.php index.html index.htm;
            sendfile on;
            keepalive_timeout 65;

            server {
                server_name localhost scholarsnyc.com www.scholarsnyc.com;
                root /srv/www/scholarsnyc.com/public;
                location / {
                    index index.php;
                }
                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                server_name data.scholarsnyc.com;
                root /srv/www/data.scholarsnyc.com/public;
                passenger_enabled on;
            }

            server {
                server_name tech.scholarsnyc.com;
                root /srv/www/tech.scholarsnyc.com/public;
                location / {
                    root /srv/www/tech.scholarsnyc.com/public;
                    index index.php index.html index.htm;
                }
            }
        }

    Any other optimizations are also appreciated. I literally don't know what to do at this point.


  • Setting font size of Closed Captions on iPhone using ffmpeg or mencoder

    - by forthrin
    Does anyone know how to either:

    - Make ffmpeg set the subtitle font size in the output video file, or
    - Make mencoder produce an iPhone-compatible video file (with subtitles)?

    I finally found out how to get Closed Captions video on the iPhone, with mkv and srt files as source material. The secret was using the mov_text subtitle codec in ffmpeg (and turning on Closed Captions in the iPhone settings, of course):

        ffmpeg -y -i in.mkv -i in.srt -map 0:0 -map 0:1 -map 1:0 -vcodec copy \
            -acodec aac -ab 256k -scodec mov_text -strict -2 \
            -metadata title="Title" -metadata:s:s:0 language=eng out.mp4

    However, the font size appears very small on the iPhone, and I can't find out how to set it with ffmpeg (the iPhone has no option for this). I found out that mencoder has a -subfont-text-scale option, but I don't have a lot of experience with this program. The following, my best attempt so far, produces an output file which is not playable on the iPhone:

        sudo port install mplayer +mencoder_extras +osd
        mencoder in.mkv -sub in.srt -o out.mp4 -ovc copy -oac faac \
            -faacopts br=256:mpeg=4:object=2 -channels 2 -srate 48000 \
            -subfont-text-scale 10 -of lavf -lavfopts format=mp4

    PS! As requested, here is the output from mencoder:

        192 audio & 400 video codecs
        success: format: 0 data: 0x0 - 0xb64b9d2f
        libavformat version 54.6.101 (internal)
        libavformat file format detected.
        [matroska,webm @ 0x1015c9a50]Unknown entry 0x80
        [lavf] stream 0: video (h264), -vid 0
        [lavf] stream 1: audio (ac3), -aid 0, -alang eng
        VIDEO: [H264] 1280x544 0bpp 49.894 fps 0.0 kbps ( 0.0 kbyte/s)
        [V] filefmt:44 fourcc:0x34363248 size:1280x544 fps:49.894 ftime:=0.0200
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        libavcodec version 54.23.100 (internal)
        AUDIO: 48000 Hz, 2 ch, s16le, 448.0 kbit/29.17% (ratio: 56000->192000)
        Selected audio codec: [ffac3] afm: ffmpeg (FFmpeg AC-3)
        ==========================================================================
        ** MUXER_LAVF *****************************************************************
        REMEMBER: MEncoder's libavformat muxing is presently broken and can generate
        INCORRECT files in the presence of B-frames. Moreover, due to bugs MPlayer
        will play these INCORRECT files as if nothing were wrong!
        *******************************************************************************
        OK, exit.
        videocodec: framecopy (1280x544 0bpp fourcc=34363248)
        VIDEO CODEC ID: 28
        AUDIO CODEC ID: 15002, TAG: 0
        Writing header...
        [mp4 @ 0x1015c9a50]Codec for stream 0 does not use global headers but container format requires global headers
        [mp4 @ 0x1015c9a50]Codec for stream 1 does not use global headers but container format requires global headers

    Then the following repeats itself for every frame:

        Pos: 0.0s 1f ( 2%) 0.00fps Trem: 0min 0mb A-V:0.000 [0:0]
        [mp4 @ 0x1015c9a50]malformated aac bitstream, use -absf aac_adtstoasc
        Error while writing frame.

    I recognise -absf aac_adtstoasc as an ffmpeg option (does mencoder spawn ffmpeg?), but I don't know how to pass this option on (my hunch is this is not even the origin of the problem).


  • What's the best scenario for using a wireless router with Comcast Business Class

    - by Buck
    Just had Comcast Business Class internet installed (usage details at the bottom of this post). During the call to order, I asked about the hardware they'd be providing and was told it was a DOCSIS 3 modem that I'd have to pay $7.00/month for. Figuring I'd have to buy a router anyway, I decided to get my own modem - a Surfboard SB6121 DOCSIS 3.

    I called in to tech support to ask some questions, and learned that the modem they would have provided DID have a router built in - an SMCD3G-CCR. It's not wireless (we need wireless). The guy explained that it was better to have their hardware here, because if there's a problem with our service and we're using our own hardware, chances are they'll blame our hardware and do nothing, since they don't support it. He explained that I could still hang my own wireless router off their modem/router, and if we ever had any service problems, we'd be able to plug directly into their hardware so they could tell where the problem is, without being able to pawn it off onto "customer provided equipment".

    That all said, a few questions:

    1. Am I better off returning my Surfboard modem and getting the Comcast one?
    2. If I get a wireless router and plug it into one of the Ethernet ports of the Comcast device, should I NOT plug anything else into the Comcast device, since it would be on a different network from anything connecting via the wireless router? Is that correct?
    3. Given that I know VERY LITTLE about networking and setting up hardware like this... since I need wireless and will HAVE to get a wireless router to work with this Comcast device, do I need to do anything with the settings of the Comcast device? Do I use security on the Comcast device, the wireless router, or both?
    4. Any suggestions, or anything I need to think about, given this scenario, in order to use a business-type VoIP service like RingCentral or Jive or Nextiva?
    5. Any recommendations on a wireless router for this scenario?

    Usage: We are running 2 PCs (possibly 3-4 in the future) - these could be wired for the time being if needed, but we would prefer wireless. We would like to have a networked hard drive and a networked printer, and NEED a business-type VoIP service ASAP for 2 phone lines. We would like to hook up some IP cameras at some point (but not the kind that require static IPs, since I don't have one, nor do I plan to pay Comcast another $15/month for one). I don't have, or plan to have, any type of web servers or anything like that. I want to use WPA or WPA2 security and take advantage of the NAT feature of the router for additional protection (that's the extent of my networking knowledge).


  • installed mongo using brew but stuck at prompt

    - by user50946
    I have installed mongo using brew on my Mac. When I run the mongo command I see this:

        MongoDB shell version: 2.4.6
        connecting to: test

    but it stays there and never gives me the command prompt back. Has anyone else noticed something like this? I have reinstalled with no luck; the issue is persistent. Thanks.

    Logs:

        ***** SERVER RESTARTED *****
        Fri Oct 18 08:11:48.360 [initandlisten] MongoDB starting : pid=2081 port=27017 dbpath=/usr/local/var/mongodb 64-bit host=Asims-MacBook-Air.local
        Fri Oct 18 08:11:48.360 [initandlisten] db version v2.4.6
        Fri Oct 18 08:11:48.360 [initandlisten] git version: nogitversion
        Fri Oct 18 08:11:48.360 [initandlisten] build info: Darwin minimountain.local 12.5.0 Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
        Fri Oct 18 08:11:48.360 [initandlisten] allocator: tcmalloc
        Fri Oct 18 08:11:48.360 [initandlisten] options: { bind_ip: "127.0.0.1", config: "/usr/local/etc/mongod.conf", dbpath: "/usr/local/var/mongodb", logappend: "true", logpath: "/usr/local/var/log/mongodb/mongo.log" }
        Fri Oct 18 08:11:48.361 [initandlisten] journal dir=/usr/local/var/mongodb/journal
        Fri Oct 18 08:11:48.361 [initandlisten] recover : no journal files present, no recovery needed
        Fri Oct 18 08:11:48.398 [websvr] admin web console waiting for connections on port 28017
        Fri Oct 18 08:11:48.398 [initandlisten] waiting for connections on port 27017
        Fri Oct 18 08:12:03.279 [signalProcessingThread] got signal 1 (Hangup: 1), will terminate after current cmd ends
        Fri Oct 18 08:12:03.279 [signalProcessingThread] now exiting
        Fri Oct 18 08:12:03.279 dbexit:
        Fri Oct 18 08:12:03.279 [signalProcessingThread] shutdown: going to close listening sockets...
        Fri Oct 18 08:12:03.279 [signalProcessingThread] closing listening socket: 9
        Fri Oct 18 08:12:03.279 [signalProcessingThread] closing listening socket: 10
        Fri Oct 18 08:12:03.280 [signalProcessingThread] closing listening socket: 11
        Fri Oct 18 08:12:03.280 [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: going to flush diaglog...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: going to close sockets...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: waiting for fs preallocator...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: lock for final commit...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: final commit...
        Fri Oct 18 08:12:03.282 [signalProcessingThread] shutdown: closing all files...
        Fri Oct 18 08:12:03.282 [signalProcessingThread] closeAllFiles() finished


  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM. The goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like TextMate to develop with).

    Everything seems to work with the shared directory: my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777, and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc.) are being created without write permissions! If I refresh the Rails page a few times - about 4 or 5, with Rails creating new three-character directories every time - eventually it will create one of the directories with the proper 777 permissions and everything will work. This persists until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it works fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice... any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the "operation not permitted" error.
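    One way to test whether the share itself is mangling directory modes, independent of Rails: create a few directories on the shared path, chmod them, and print the mode each one actually ends up with. A sketch; the mount point is illustrative, not the author's actual path:

        import os
        import stat
        import tempfile

        share = "/media/psf/Home/project/tmp/cache"   # hypothetical Parallels share path

        for _ in range(5):
            d = tempfile.mkdtemp(dir=share)           # created with mode 0700 by default
            os.chmod(d, 0o777)                        # ask for full permissions
            actual = stat.S_IMODE(os.stat(d).st_mode)
            print(d, oct(actual))                     # does the share honour the chmod?

    If the printed modes differ from 0o777, or vary between iterations, the share layer is rewriting permissions and the Rails behaviour is just a symptom.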


  • Issue using a "used" SSD as a Windows 8.1 Boot Drive

    - by EpiGrad
    So, I'm something of a Mac person, but I decided to take a stab at this whole "build yourself a PC" thing. Right now the machine is assembled, POSTs just fine, and can get to the BIOS. The problem is the drive I want to use: I intended to use an 80 GB Corsair SSD I've had sitting around as the boot drive, and a new Samsung SSD for games and the like.

    So I boot using a Windows 8.1 install USB stick, and if the Samsung drive is plugged in, it happily offers to install Windows on it. The Corsair drive, though, has flipped out: I reformatted it as a blank NTFS drive (it was HFS for Mac purposes), and neither the BIOS nor the Windows installer can see it. What's wrong, and how do I fix it?

    The tools at my disposal are: the current ASUS BIOS that came with my motherboard (a Z87I-Deluxe), and a Mac running the latest OS X, which can also boot to Windows 7 if needed via either Parallels or Boot Camp.

    Update 1: Based on a friend's suggestion to switch SATA ports, Windows 8.1's installer can now see the drive as Drive 0, Partition 1, an 83.8 GB "Primary" partition. But when I click it and hit "Next", I get the following error: "We couldn't create a new partition or locate an existing one. For more information, see the Setup log files" - not that it gives any clue how to access those.

    Update 2: Following a trail of Google suggestions, I ended up going into the advanced tools and just reformatting the drive as follows:

        1. Start DISKPART.
        2. Type LIST DISK and identify your SSD disk number (from 0 to n disks).
        3. Type SELECT DISK <n>, where <n> is your SSD disk number.
        4. Type CLEAN
        5. Type CREATE PARTITION PRIMARY
        6. Type ACTIVE
        7. Type FORMAT FS=NTFS QUICK
        8. Type ASSIGN
        9. Type EXIT twice (once to get out of DiskPart, once to exit the command-line tool)

    per these instructions. That went well enough, and now I can select the disk for installation, but I get a new error: "Windows 8 cannot be installed to this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed to GPT disks."

    So, Googling that, I did the following:

        select disk 0
        clean
        convert gpt
        exit

    ...and we might have fixed it. Windows is at least trying to install now.


  • How to update data in the user information list when using FBA

    - by Flo
    I support a SharePoint web application which uses FBA with a custom membership provider and a custom role provider to authenticate users against two different LDAPs. The user data are only stored in the user information lists; the SSP user profiles are not used.

    Now one of the users got married, and her surname changed in the LDAP (the one where her information is stored). But this change doesn't get provisioned into the user information list.

    I'm wondering what options I have to provision changes of user data to the user information list. I've already tried to update the user's last name by hand, but it seems as if certain fields, like surname and first name, are not editable in the user information list - I tried to edit them as a site administrator.

    So what options do I have to solve this problem? Being able to edit the information by hand would also be a solution, though of course not the most preferred one.


  • The type or namespace name 'UI' cannot be found for using System.Web.UI

    - by user284523
    I am following the tutorial here: http://msdn.microsoft.com/en-us/library/h59db326.aspx

        1. Create an App_Code directory directly under the root directory of your Web site (also called the Web application root).
        2. Copy the source file for the control (WelcomeLabel.cs or WelcomeLabel.vb) to the App_Code directory.

    But I got an error on:

        using System.Web.UI;
        using System.Web.UI.WebControl;

    I have tried adding System.Web as a reference, but that still doesn't resolve it. I can't see System.Web.UI or System.Web.UI.WebControl in the reference lists - is this normal? Thanks.


  • How to Create Site using Sharepoint web services?

    - by Pari
    Hi all,

    I am trying to create a site on SharePoint programmatically, using the SharePoint web services (C#). I tried the Admin.asmx service (CreateSite method), but it's showing the error: "An unhandled exception of type 'System.InvalidOperationException' occurred in System.Web.Services.dll". I tried with all possible parameters. I am currently referring to the links below:

        http://www.oliebol.org/blog/Lists/Posts/Post.aspx?ID=6
        http://msdn.microsoft.com/en-us/library/administration.admin.createsite.aspx

    My code:

        Admin admService = new Admin();
        admService.Credentials = new NetworkCredential(username, password, domain);
        admService.Url = "http://mychserver/_vti_adm/admin.asmx";
        admService.PreAuthenticate = true;
        try
        {
            String SitePath = "http://myserver/SiteDirectory/SharepointSampleSite";
            admService.CreateSite(SitePath, "First Site", "Sample Site", 1033,
                "STS#0", "Domain\\username", username, userid, "", "");
        }
        catch (System.Web.Services.Protocols.SoapException ex)
        {
            MessageBox.Show("Message:\n" + ex.Message + "\nDetail:\n" +
                ex.Detail.InnerText + "\nStackTrace:\n" + ex.StackTrace);
        }

    Thanks,
    Pari


  • How to drag items between 2 sorted lists with jQuery?

    - by sa125
    Hi - I'm trying to implement drag/drop/sort between 2 list elements:

        <ul id="first">
          <li>item 1</li>
          <li>item 2</li>
          <li>item 3</li>
        </ul>

        <ul id="second">
          <li>item 4</li>
          <li>item 5</li>
          <li>item 6</li>
        </ul>

    Basically, I just want to be able to pass items between the lists and sort the items in each list. What's the simplest way to implement this using jQuery?


  • How to convert linq entitySet AND CHILDREN to lists?

    - by Abe Miessler
    I ran into an error when trying to serialize a LINQ EntitySet. To get around this, I converted the EntitySet to a list. The problem I have run into now is that its child entity sets are not converted to lists, and when I try to serialize the parent, those now throw the error.

    Does anyone know of a way to convert a LINQ EntitySet AND its children to lists?

    P.S. I'm new to LINQ, so if any of this doesn't make sense, let me know.

