Search Results

Search found 12666 results on 507 pages for 'knowledge base'.


  • LdapErr: DSID-0C0903AA, data 52e: authenticating against AD '08 with pam_ldap

    - by Stefan M
    I have full admin access to the AD '08 server I'm trying to authenticate against. The error code means invalid credentials, but I wish this were as simple as me typing in the wrong password.

    First of all, I have a working Apache mod_ldap configuration against the same domain:

        AuthType basic
        AuthName "MYDOMAIN"
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://10.220.100.10/OU=Companies,MYCOMPANY,DC=southit,DC=inet?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN svc_webaccess_auth
        AuthLDAPBindPassword mySvcWebAccessPassword
        Require ldap-group CN=Service_WebAccess,OU=Groups,OU=MYCOMPANY,DC=southit,DC=inet

    I'm showing this because it works without the use of any Kerberos, which so many other guides out there recommend for system authentication to AD. Now I want to translate this into pam_ldap.conf for use with OpenSSH. The /etc/pam.d/common-auth part is simple:

        auth sufficient pam_ldap.so debug

    This line is processed before any other. I believe the real issue is configuring pam_ldap.conf:

        host 10.220.100.10
        base OU=Companies,MYCOMPANY,DC=southit,DC=inet
        ldap_version 3
        binddn svc_webaccess_auth
        bindpw mySvcWebAccessPassword
        scope sub
        timelimit 30
        pam_filter objectclass=User
        nss_map_attribute uid sAMAccountName
        pam_login_attribute sAMAccountName
        pam_password ad

    I've been monitoring LDAP traffic on the AD host using Wireshark, and I've captured a successful session from Apache's mod_ldap and compared it to a failed session from pam_ldap. The first BindRequest is a success using the svc_webaccess_auth account, and the SearchRequest is a success and returns 1 result. The last BindRequest, using my own user, is a failure and returns the above error code. Everything looks identical except for this one line in the filter of the SearchRequest. mod_ldap sends:

        Filter: (&(objectClass=user)(sAMAccountName=ivasta))

    pam_ldap sends:

        Filter: (&(&(objectclass=User)(objectclass=User))(sAMAccountName=ivasta))

    My user is named ivasta. The SearchRequest does not fail, however; it returns 1 result. I've also tried this with ldapsearch on the CLI. It's the BindRequest that follows the SearchRequest that fails with the above error code 52e. Here is the failure message of the final BindRequest:

        resultcode: invalidcredentials (49)
        80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772

    This should mean invalid password, but I've tried with other users and with very simple passwords. Does anyone recognize this from their own struggles with pam_ldap and AD?

    Edit: Worth noting is that I've also tried pam_password crypt, and pam_filter sAMAccountName=User, because this worked when using ldapsearch:

        ldapsearch -LLL -h 10.220.100.10 -x -b "ou=Users,ou=mycompany,dc=southit,dc=inet" -v -s sub -D svc_webaccess_auth -W '(sAMAccountName=ivasta)'

    This works using the svc_webaccess_auth account password. This account has scan access to that OU for use with Apache's mod_ldap.
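
    For reference, pam_ldap's two-step flow can be reproduced from the shell to isolate which step fails: first a search as the service account to find the user's DN, then a bind as that DN with the user's own password. A minimal sketch (the user DN below is hypothetical; substitute whatever the search actually returns):

        # Step 1: find the user's DN using the service account
        ldapsearch -LLL -x -h 10.220.100.10 \
            -D svc_webaccess_auth -W \
            -b "OU=Companies,MYCOMPANY,DC=southit,DC=inet" \
            '(sAMAccountName=ivasta)' dn

        # Step 2: bind as the DN returned above, with the user's own password;
        # if this fails with data 52e, the problem is the bind step, not the search
        ldapsearch -LLL -x -h 10.220.100.10 \
            -D "CN=ivasta,OU=Users,OU=Companies,MYCOMPANY,DC=southit,DC=inet" -W \
            -b "OU=Companies,MYCOMPANY,DC=southit,DC=inet" \
            '(sAMAccountName=ivasta)' dn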

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end to administer it). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4 500 GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do a live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) to sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that allows me to instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols???) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008.

    So: ZFS, Zumastor, or other?
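
    For context, the snapshot-diff workflow described above is exactly what ZFS send/receive does. A minimal sketch, assuming a pool named "tank" with the VM images on a dataset "tank/vms" and a backup host reachable over SSH (all of those names are illustrative):

        # take point-in-time snapshots on a schedule
        zfs snapshot tank/vms@0600
        zfs snapshot tank/vms@0700

        # send only the blocks written between the two snapshots,
        # compressed in transit, and apply them on the backup host
        zfs send -i tank/vms@0600 tank/vms@0700 | bzip2 | \
            ssh backup "bzcat | zfs receive tank/vms-backup"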

  • How to stop windows resizing when the monitor display channel is turned off / switched to different source

    - by Heartspeace
    I have a new 6870 ATI Radeon adapter, with its drivers set to 1080p 60Hz resolution, hooked up to a 2008 47" high-end Samsung HDMI-based TV. However, when the TV is turned to a different HDMI input, somehow (when I come back into Windows) Windows decides to resize all the open apps to a lower resolution - including some of the side-docked hidden pop-outs. When it resizes those, though, it just sticks the pop-outs in the middle of the screen, and all the resized windows from the open applications end up in the top left corner - all of them stacked on top of each other and resized to the smaller resolution. The things that seem to be OK after returning are the icons on the desktop, the taskbar, and the sidebar.

    Does anyone have any knowledge of 1) how this happens, 2) why it happens, and 3) how to stop it from resizing the applications and some of the docked pop-outs? (They are not really resized after returning - they are just stuck in the middle of the screen, approximately where they would be if the right or bottom sidebar were there after the screen was resized to that lower resolution.)

    My hypothesis is that upon losing the HDMI signal, Windows is told by something (the driver, or Windows itself) what the resolution should be while no signal is present (noting that HDMI signals and handshakes are two-way on HDMI devices: if it loses the signal, or the TV is switched to another device, then the display adapter must figure that out and tell Windows, or Windows figures it out and randomly decides to change the display size).

    Any and all help is most appreciated. I asked AMD/ATI, but they said they don't know why or how this is happening. I was hoping that maybe this is THE place that the super users truly go to - those that develop display adapter drivers, or that dive deeply into these areas of Windows. If there are better sites, or just competing sites, please advise - noting I have already written AMD/ATI.

    HP Response / Additions 4/7/2011: It is really nice to get your reply, Shinrai. (BTW, is it proper etiquette on these forums to have a discussion?) Yet 'only one issue' - I am using a single display in this case, so Windows doesn't move application windows to another desktop. Windows (or something) decides to shrink the desktop it currently has and resize all windows to the maximum size of the desktop. As such, I would be glad if Windows would just keep the current size of the one desktop that is in operation.

    I also know that this does NOT happen on monitors connected with DVI. There I have had one- and two-monitor setups, and it doesn't resize those screens at all when disconnecting monitors, turning them off, whatever... they stay solid - everything in place - to such an extent that if you forget the other monitor is off, you will have trouble finding some windows without using one of the control app utilities. So if I could even get the HDMI handling by Windows (or the display driver) to behave like DVI, life would be golden (for this aspect anyway). That leaves two questions: 1) which is doing this anyway, the display driver or Windows? And 2) where is that other resolution size (1024x768) coming from? It's not the smallest and it's not the largest.

    ** Found others with the same problem in this thread: http://hardforum.com/showthread.php?t=1507324

    Thanks, HP

  • Routing table with two NIC adapters in libvirt/KVM

    - by lzap
    I created a virtual NAT network (the 192.168.100.0/24 network) in my libvirt setup, and a new guest with two interfaces - one in this network, and one bridged (the 10.34.1.0/24 network) to the local LAN. The reason for that is I need to have my own virtual network for my DHCP/TFTP/DNS testing and still want to access my guest externally from my LAN. On both networks I have working DHCP, both giving out IP addresses.

    When I set up NAT port forwarding (e.g. for SSH), I can connect to eth0 (the virtual network); everything is fine. But when I try to access eth1 via the bridged interface, I get no response. I guess I have a problem with my routing table - outgoing packets are routed to the virtual NAT network (which has access to the machine I am connecting from - I can ping it). But I am not sure if this setup is correct. I think I need to add something to my routing table.

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 52:54:00:B4:A7:5F
                  inet addr:192.168.100.14  Bcast:192.168.100.255  Mask:255.255.255.0
                  inet6 addr: fe80::5054:ff:feb4:a75f/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:16468 errors:0 dropped:27 overruns:0 frame:0
                  TX packets:6081 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:22066140 (21.0 MiB)  TX bytes:483249 (471.9 KiB)
                  Interrupt:11 Base address:0x2000

        eth1      Link encap:Ethernet  HWaddr 52:54:00:DE:16:21
                  inet addr:10.34.1.111  Bcast:10.34.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::5054:ff:fede:1621/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:34 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:189 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4911 (4.7 KiB)  TX bytes:9

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
        10.34.1.0       0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
        0.0.0.0         192.168.100.1   0.0.0.0         UG    0      0        0 eth0

    The network I am trying to connect from (10.36.0.0) is different from the network the hypervisor is connected to, but it is accessible from that network. So I tried to add a new route rule:

        route add -net 10.36.0.0 netmask 255.255.0.0 dev eth1

    And it is not working. I thought setting the correct interface would be sufficient. What is needed to get my packets coming through?
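
    A note on the mechanics here: with a single routing table, replies to connections arriving on eth1 from 10.36.0.0/16 still leave via the default route on eth0, so the far end never sees them. The usual way to express "answer out the interface the packet came in on" is source-based policy routing with iproute2. A minimal sketch, assuming 10.34.1.1 is a usable gateway on the bridged LAN (that address is an assumption):

        # a second routing table for traffic originating from the eth1 address
        ip route add default via 10.34.1.1 dev eth1 table 100
        ip rule add from 10.34.1.111 lookup 100
        ip route flush cache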

  • Windows 7 losing one of my displays after restart

    - by j_kubik
    I have an Intel DZ68BC motherboard with an Intel HD graphics card using two monitors (one on DVI and one on HDMI » VGA). My friend asked me to test whether his NVIDIA graphics card works well on my computer (at his place it was causing some trouble), so I inserted it in my computer, installed the NVIDIA driver, and it worked quite well. Then I removed it, uninstalled everything NVIDIA-related I could find, and switched the monitors back to my Intel card.

    Since then, after every system start/restart, the system sees only the monitor on the HDMI » VGA connector, completely ignoring the DVI monitor. I noticed that installing the Intel video drivers causes the system to recognize the second monitor if I don't immediately reboot. After a reboot, the system recognizes only the HDMI » VGA monitor.

    I also tried starting in safe mode and using Driver Sweeper to remove the remains of the NVIDIA drivers. While it seems that some drivers were removed, the situation didn't change. Now I am out of ideas, and I really wouldn't like to reinstall the system (again...). I also tried restoring the system to the state before this whole story, but that also didn't change anything.

    EDIT: I am still trying to troubleshoot this problem. The only starting point I had was driver re-installation. I traced down the part that restores the right settings to this call:

        C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\x64\Drv64.exe -driverinf "C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\Graphics\igdlh64.inf" -flags 20 -keypath "Software\Intel\Difx64"

    This call fixes my displays, and as a workaround I will add it to my autorun for now. I am still looking for a better solution anyway...

    EDIT2: Using DriverView I made a list of the currently used drivers both before and after fixing my display using the above command, then compared the logs. No drivers were removed by the fixing command. Drivers added by the fixing command:

        MS Remote Access serial network driver (asyncmac.sys)
        security processor (spsys.sys)

    Drivers that changed base address (which indicates a driver reload?):

        Canonical Display Driver (cdd.dll)
        Intel Graphics Kernel Mode Driver (igdkmd64.sys)
        Monitor Driver (monitor.sys)

    The added drivers seem rather unrelated to the problem to me, and the reloaded drivers are just a consequence of installing a new driver file, so there is not much to go on here... I really cannot make heads or tails of it...

  • Windows Media Player won't launch on Vista - how to repair or reinstall it?

    - by rpm1200
    My friend asked me to look at her Acer Aspire laptop with Vista Home Premium, as it is no longer playing DVDs. I found that Windows Media Player would not launch. I found this thread, which contained a number of suggestions, none of which solved the problem. Here is what I tried:

    - Tried running WMP via the desktop shortcut, the QuickLaunch bar, or by going to Program Files\Windows Media Player\wmplayer.exe. In all cases, wmplayer would launch then terminate immediately (verified through the Processes tab in Task Manager).
    - Tried running wmplayer.exe as Administrator. The UAC dialog would come up, I'd approve, then wmplayer would launch and terminate immediately.
    - Uninstalled all non-Microsoft media programs except RealPlayer, iTunes, QuickTime, and Acer Arcade (the laptop owner uses all those apps).
    - Tried running Program Files\Windows Media Player\setup_wm.exe as Administrator; it launched but said that a newer version of WMP was already installed.
    - Deleted the "Windows Media" folder located under %userprofile%\appdata\local\Microsoft, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Registered wmp.dll by typing "regsvr32 wmp.dll" in an Administrator cmd window, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Ran "SFC /SCANFILE" in an Administrator cmd window - got an error message that it found invalid system files and could not fix them, so I looked at the log file cbs.log. The log file shows that there are broken files associated with Windows Sidebar (which the user does not use) but none relating to WMP.
    - Logged off to safe mode and ran "SFC /SCANFILE" in an Administrator cmd window again - same results.
    - Tried to download and install XP WMP - the microsoft.com site recognizes the OS as Genuine and allows the download, but when I launch the installer it says the system is not Genuine. Clicking the link directs me back to IE, where I can authenticate the system as Genuine. The installer still fails to recognize the system as Genuine. It is a Genuine Vista installation.
    - Tried to run this update (KB931621). The installer said it did not apply to the system.
    - Set Windows Media Player as default in Program Access and Defaults. Same results.
    - Tried running "for %a in (%systemroot%\system32\wm*.dll) do regsvr32 /s %a" in an Administrator cmd window - same results.
    - Went to this Knowledge Base article (947541) and ran the Microsoft Fix It. The Fix It ran successfully, but WMP would still launch and terminate immediately.
    - Multiple reboots in the process of doing all of these steps.

    After all this, I looked in the Application and Security logs. No events pertaining to WMP were logged. The computer was preinstalled with Vista Home Premium, and I have the Acer backup DVDs, which will reimage the drive. I do not have Vista install DVDs. Reimaging the system is not an option. I'd also rather not restore the system to an earlier point unless it's absolutely necessary.

    What else can I do to repair or reinstall WMP?

  • Linux service --status-all shows "Firewall is stopped." what service does firewall refer to?

    - by codewaggle
    I have a development server with the LAMP stack running CentOS:

        [Prompt]# cat /etc/redhat-release
        CentOS release 5.8 (Final)

        [Prompt]# cat /proc/version
        Linux version 2.6.18-308.16.1.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) #1 SMP Tue Oct 2 22:50:05 EDT 2012

        [Prompt]# yum info iptables
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirror.anl.gov
         * extras: centos.mirrors.tds.net
         * rpmfusion-free-updates: mirror.us.leaseweb.net
         * rpmfusion-nonfree-updates: mirror.us.leaseweb.net
         * updates: mirror.steadfast.net
        Installed Packages
        Name    : iptables
        Arch    : x86_64
        Version : 1.3.5
        Release : 9.1.el5
        Size    : 661 k
        Repo    : installed
        ....Snip....

    When I run:

        service --status-all

    part of the output looks like this:

        ....Snip....
        httpd (pid xxxxx) is running...
        Firewall is stopped.
        Table: filter
        Chain INPUT (policy DROP)
        num  target               prot opt source     destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0  0.0.0.0/0
        Chain FORWARD (policy DROP)
        num  target               prot opt source     destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0  0.0.0.0/0
        Chain OUTPUT (policy ACCEPT)
        num  target               prot opt source     destination
        Chain RH-Firewall-1-INPUT (2 references)
        ....Snip....

    iptables has been loaded into the kernel and is active, as represented by the rules being displayed. Checking just iptables returns the rules, just like --status-all does:

        [Prompt]# service iptables status
        Table: filter
        Chain INPUT (policy DROP)
        num  target               prot opt source     destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0  0.0.0.0/0
        Chain FORWARD (policy DROP)
        num  target               prot opt source     destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0  0.0.0.0/0
        Chain OUTPUT (policy ACCEPT)
        num  target               prot opt source     destination
        Chain RH-Firewall-1-INPUT (2 references)
        ....Snip....

    Starting or restarting iptables indicates that the rules have been loaded into the kernel successfully:

        [Prompt]# service iptables restart
        Flushing firewall rules:                                   [  OK  ]
        Setting chains to policy ACCEPT: filter                    [  OK  ]
        Unloading iptables modules:                                [  OK  ]
        Applying iptables firewall rules:                          [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_n[  OK  ]

        [Prompt]# service iptables start
        Flushing firewall rules:                                   [  OK  ]
        Setting chains to policy ACCEPT: filter                    [  OK  ]
        Unloading iptables modules:                                [  OK  ]
        Applying iptables firewall rules:                          [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_n[  OK  ]

    I've googled "Firewall is stopped." and read a number of iptables guides as well as the RHEL documentation, but no luck. As far as I can tell, there isn't a "Firewall" service, so what is the line "Firewall is stopped." referring to?
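
    Since service --status-all essentially runs every script in /etc/init.d/ with the "status" argument, one way to track down a mystery status line is to grep the initscripts for the string that prints it. A minimal sketch (output will vary by system):

        # find which initscript owns the message
        grep -rl "Firewall is stopped" /etc/init.d/

        # if nothing matches, the string may come from a binary that an initscript calls
        grep -rl "Firewall is stopped" /sbin /usr/sbin 2>/dev/null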

  • How to allow unprivileged apache/PHP to do a root task (CentOS)

    - by Chris
    I am setting up a sort of personal Dropbox for our customers on a CentOS 6.3 machine. The server will be accessible through SFTP and a proprietary HTTP service based on PHP. This machine will be in our DMZ, so it has to be secure. Because of this, I have Apache running as an unprivileged user, have hardened the security on Apache, the OS, and PHP, applied a lot of filtering in iptables, and applied some restrictive TCP wrappers. Now, you might have suspected this one was coming: SELinux is also set to enforcing.

    I'm setting up PAM to use MySQL so my users in the web application can log in. These users will all be in a group that can use SSH only for SFTP, and users will be chrooted to their own 'home' folder. To allow this, SELinux wants the folders to have the user_home_t tag. Also, the parent directory needs to be writable by root only. If these restrictions are not met, SELinux will kill the SSH pipe immediately. The files need to be accessible through both HTTP and SFTP, so I have made a SELinux module to allow Apache to search/attr/read/write etc. directories with the user_home_dir_t tag.

    As the SFTP users are stored in MySQL, I want to set up their home dirs upon user creation. This is a problem, since Apache has no write access to /home; it's only writable by root, which is required to keep SELinux and OpenSSH happy. Basically, I need to let Apache do only a few tasks as root, and only within /home. So I need to somehow elevate the privileges temporarily, or let root do these tasks for Apache instead. What I need Apache to do with root privileges is the following:

        mkdir /home/userdir/
        mkdir /home/userdir/userdir
        chmod -R 0755 /home/userdir
        umask 011 /home/userdir/userdir
        chcon -R -t user_home_t /home/userdir
        chown -R user:sftp_admin /home/userdir/userdir
        chmod 2770 /home/userdir/userdir

    This would create a home for the user. I have one idea that might work: cron. But that would mean the server checks every minute for users that have no home, and when creating a user the interface would freeze for an average of 30 seconds before the account creation could be confirmed, which I would prefer to avoid. Does anybody know if something can be done with sudoers? Any other ideas are welcome too... Thanks for your time!
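
    Sudo can grant exactly this: a single root-owned script that the Apache user may run with no password, and nothing else. A minimal sketch, assuming the web server runs as user "apache"; the script name and path are made up for illustration:

        # /etc/sudoers.d/mkuserhome (edit with: visudo -f /etc/sudoers.d/mkuserhome)
        apache ALL=(root) NOPASSWD: /usr/local/sbin/mkuserhome.sh

        # /usr/local/sbin/mkuserhome.sh - root-owned, mode 0700
        #!/bin/sh
        user="$1"
        # refuse anything that is not a plain lowercase username,
        # so a compromised web app cannot pass paths like "../etc"
        echo "$user" | grep -Eq '^[a-z][a-z0-9_-]*$' || exit 1
        mkdir -p "/home/$user/$user"
        chmod -R 0755 "/home/$user"
        chcon -R -t user_home_t "/home/$user"
        chown -R "$user:sftp_admin" "/home/$user/$user"
        chmod 2770 "/home/$user/$user"

    PHP would then invoke it with something like exec('sudo /usr/local/sbin/mkuserhome.sh ' . escapeshellarg($user)), so the account is created synchronously instead of waiting for a cron pass.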

  • Excel 2010: dynamic update of drop down list based upon datasource validation worksheet changes

    - by hornetbzz
    I have one worksheet for setting up the data sources of multiple data validation lists. In other words, I'm using this worksheet to provide drop-down lists to multiple other worksheets. I need to dynamically update all worksheets upon any single change (or several changes) on the data source worksheet. I understand this should be done with an event macro over the entire workbook. My question is how to achieve this while keeping the "OFFSET" formula across the whole workbook. Thx.

    To support my question, here is the piece of code that I'm trying to get working, with the following information provided: I'm using a formula like this for a pseudo-dynamic update of the drop-down lists, for example:

        =OFFSET(MyDataSourceSheet!$O$2;0;0;COUNTA(MyDataSourceSheet!O:O)-1)

    I looked into the Pearson book's event chapter, but I'm too much of a noob for this. I understand the macro below and implemented it successfully as a test, with the drop-down list on the same worksheet as the data source. My point is that I don't know how to deploy this over a complete workbook.

    Macro related to the data source worksheet:

        Option Explicit

        Private Sub Worksheet_Change(ByVal Target As Range)
            ' Macro to update all worksheets with drop down list referenced upon
            ' this data source worksheet, based on ref names
            Dim cell As Range
            Dim isect As Range
            Dim vOldValue As Variant, vNewValue As Variant
            Dim dvLists(1 To 6) As String 'data validation area
            Dim OneValidationListName As Variant

            dvLists(1) = "mylist1"
            dvLists(2) = "mylist2"
            dvLists(3) = "mylist3"
            dvLists(4) = "mylist4"
            dvLists(5) = "mylist5"
            dvLists(6) = "mylist6"

            On Error GoTo errorHandler

            For Each OneValidationListName In dvLists
                'Set isect = Application.Intersect(Target, ThisWorkbook.Names("STEP").RefersToRange)
                Set isect = Application.Intersect(Target, ThisWorkbook.Names(OneValidationListName).RefersToRange)

                ' If a change occurred in the source data sheet
                If Not isect Is Nothing Then
                    ' Prevent infinite loops
                    Application.EnableEvents = False

                    ' Get previous value of this cell
                    With Target
                        vNewValue = .Value
                        Application.Undo
                        vOldValue = .Value
                        .Value = vNewValue
                    End With

                    ' LOCAL dropdown lists: for every cell with validation
                    For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation)
                        With cell
                            ' If it has list validation AND the validation formula matches AND the value is the old value
                            If .Validation.Type = 3 And .Validation.Formula1 = "=" & OneValidationListName And .Value = vOldValue Then
                                ' Debug
                                ' MsgBox "Address: " & Target.Address
                                ' Change the cell value
                                cell.Value = vNewValue
                            End If
                        End With
                    Next cell

                    ' Call to other worksheets update macros
                    Call Sheets(5).UpdateDropDownList(vOldValue, vNewValue)
                    ' GoTo NowGetOut
                    Application.EnableEvents = True
                End If
            Next OneValidationListName

        NowGetOut:
            Application.EnableEvents = True
            Exit Sub

        errorHandler:
            MsgBox "Err " & Err.Number & " : " & Err.Description
            Resume NowGetOut
        End Sub

    Macro UpdateDropDownList related to the destination worksheet:

        Sub UpdateDropDownList(Optional vOldValue As Variant, Optional vNewValue As Variant)
            ' Debug
            MsgBox "Received info for update : " & vNewValue

            ' For every cell with validation
            For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation)
                With cell
                    ' If it has list validation AND the value is the old value
                    ' If .Validation.Type = 3 And .Value = vOldValue Then
                    If .Validation.Type = 3 And .Value = vOldValue Then
                        ' Change the cell value
                        cell.Value = vNewValue
                    End If
                End With
            Next cell
        End Sub

  • Choosing parts for a high-spec custom PC - feedback required [closed]

    - by James
    I'm looking to build a high-spec PC costing under ~£800 (bearing in mind I can get the CPU half price). This is my first time doing this, so I have plenty of questions! I have been doing lots of research, and this is what I have come up with: http://pcpartpicker.com/uk/p/j4lE

    Usage: I will be using it for Adobe CS6, rendering in 3DS Max, particle simulations in RealFlow, and for playing games like GTA IV (and V when it comes out), Crysis 1/2, Saints Row: The Third, Deus Ex: HR, etc.

    Questions:

    - Can you see any obvious problem areas with the current setup? Will it be sufficient for the above usage? I won't be doing any overclocking initially.
    - Is it worth buying the H60 liquid cooler, or will the fan that comes with the CPU be sufficient? Is water cooling generally quieter?
    - Is the chosen motherboard good for the current components? And is it future-proof?
    - I read that the HDD is often the bottleneck when it comes to gaming. I presume this applies to other high-end applications too? If so, is my selection good?
    - I keep changing my mind about the GPU; first the 560, now the 660. Can anyone shed some light on how to choose? I read mixed opinions about matching the GPU to the CPU. Will the 560 or the 660 be sufficient for my required usage? At the moment I'm basing my choice on the PassMark benchmarks and how much they cost.
    - The specs on the GeForce website state that the 560 and the 660 both require 450W. Is this a good figure to base the wattage of my PSU on? If so, how do you decide? Do I really need 750W? The latest GTX 690 requires 650W. Is it a good idea to buy a 750W PSU now to future-proof myself?

  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (that requires PHP - it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right - I seem to be getting a 400 Bad Request error whenever I try to browse to my website.

    I've been mostly following one guide, and I've done more or less everything it recommends, except for not setting up PHP-FPM to use a Unix socket, and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP, and MySQL. Anyway, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name - which contains my real name):

    /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 15;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/subdomain.mydomain.net:

        server {
            listen 80;       # listen for IPv4
            listen [::]:80;  # listen for IPv6

            server_name www.subdomain.mydomain.net subdomain.mydomain.net;

            access_log /srv/www/subdomain.mydomain.net/logs/access.log;
            error_log /srv/www/subdomain.mydomain.net/logs/error.log;

            location / {
                root /srv/www/subdomain.mydomain.net/public;
                index index.php;
            }

            location ~ \.php$ {
                try_files $uri =400;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name;
            }
        }

    All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post, as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. The only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users.

    What can I do to resolve my issue? Please let me know if I need to supply any additional details.
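
    When nginx returns a 400 with nothing obviously wrong in the config, the quickest way to see which side is rejecting the request is to watch the logs while issuing a bare request. A minimal sketch (paths are the ones from the config above):

        # watch both error logs while making a request
        tail -f /var/log/nginx/error.log /srv/www/subdomain.mydomain.net/logs/error.log &
        curl -v http://subdomain.mydomain.net/index.php

    Note also that in this config, "try_files $uri =400;" makes nginx answer 400 whenever the requested .php file does not exist on disk (404 or 403 would be the more conventional fallback), so a wrong root or a missing index.php would produce exactly this symptom.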

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that we have available to us before we send the base image in for replication by the rental company.

    Every year, we have presenters approach us on site with up to gigs of data that need to be pushed to the room that they will be presenting in. Sometimes they only have a few small files, such as a slide PDF, but sometimes the data can be much larger, 5 GiB. Our current strategy for pushing these files is batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch-RoboCopy to push the torrent into a folder on the remote machines that is being monitored by an installed BT client. Often, this data needs to be pushed immediately within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes.

    We occasionally have a need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task. We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has allowed us to have a much faster response time, but the whole batch process can get messy when there are multiple jobs being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this large a scale, plus it is really quite expensive for a once-a-year task like this.

    EDIT: There is a hard requirement that the remote machines on the floor run Windows. The control machines do not have a hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers.

    Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?

  • Network config / gear question

    - by mcgee1234
    I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors, and the servers need to be reachable over the internet.

    A little more detail: the networks the servers need to reach are inside the data center and are "trusted". Connections to these networks will be made through intra-data-center cross connects. It is kind of like a manufacturing line, where we receive data from one vendor (burstable up to 200 Mbit), churn through it on the servers, and then send out data to another vendor (bursts up to 20 Mbit). This series of events is very latency sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the internet, I plan to use a site-to-site VPN. (This part is only relevant as far as hardware selection goes.)

    I have 2 configurations in mind:

    - A Cisco 2911 (2921) (with the additional WAN ports module) and a layer 2 switch - in this scenario, I would use the router for the VPN as well.
    - A Cisco 3560 layer 3 switch to interconnect the networks inside the data center, and an ASA 5510 (which is total overkill, but the 5505 is not rack mountable) as a firewall for the WAN side (internet) and the VPN. I envision the setup to be as follows:

        Internet - ASA - 3560
        Vendors - 3560 - Servers

    The general idea is that the ASA acts as a firewall and VPN device and the 3560 does all the heavy lifting. The first is a fairly traditional setup, but my concern is performance. The second is somewhat unorthodox, in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding, however, a layer 3 switch will perform substantially better, as it does hardware (ASIC) vs. software switching. (Note that number 2 is a little over the budget, but not unworkable (double negative, ugh).)

    Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will retain a block (or blocks) of public IPs, VLAN them to individual interfaces for the vendor connections and the servers (which will not be reachable from the WAN side, of course), and set up routing on the switch.

    So here are my questions:

    - Is there a substantial performance difference between 1 and 2, i.e. hardware-based switching on a layer 3 switch vs. software-based on the 2911? I have trawled the internet and found a lot of Cisco literature, but nothing that I could really use to get a good handle on it.
    - The vendors we connect to are secure and trusted (famous last words), and as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what kind of latency are we really talking about if I push the data through a router (or even an ASA, for that matter)? For our purposes, 5 ms will not kill us; 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league.
    - Are there any issues with using public IPs on a layer 3 switch?

    I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books, so I welcome any advice / insight. Thanks in advance.

  • BlueScreens on my ThinkPad with Windows 7 64-bit and an SSD (CRITICAL_OBJECT_TERMINATION, ntoskrnl.exe)

    - by pvorb
    I'm getting blue screens about every five days, and have been for more than three months. Here's an example:

        A problem has been detected and Windows has been shut down to prevent damage to your computer.

        The problem seems to be caused by the following file: ntoskrnl.exe

        CRITICAL_OBJECT_TERMINATION

        If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps:

        Check to make sure any new hardware or software is properly installed. If this is a new installation, ask your hardware or software manufacturer for any Windows updates you might need.

        If problems continue, disable or remove any newly installed hardware or software. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select Advanced Startup Options, and then select Safe Mode.

        Technical Information:

        *** STOP: 0x000000f4 (0x0000000000000003, 0xfffffa80065f2b30, 0xfffffa80065f2e10, 0xfffff80002f9bf40)

        *** ntoskrnl.exe - Address 0xfffff80002c98d00 base at 0xfffff80002c19000 DateStamp 0x4d9fdd5b

    It has always been the same blue screen message, showing CRITICAL_OBJECT_TERMINATION, 0x000000f4, and ntoskrnl.exe. Of course the addresses change.

    My computer is a ThinkPad T400 (about 2 years old) with an SSD in it, running Windows 7 Professional 64-bit. When I bought the computer, it had a 250 GB Seagate HDD in it, which I replaced with a 500 GB HDD by Western Digital. Last September I bought a Corsair F120 SSD and replaced the HDD with the SSD. Then I bought a LEICKE HDD adapter for the UltraBay II, where I plugged in my 500 GB HDD. This configuration ran for about half a year without any errors.

    After re-installing Windows this spring, I am getting regular blue screens. Sometimes my system runs for about 2 weeks without a BSOD; sometimes I get several blue screens a day. The only thing I've noticed is that I'm always running Google Chrome when it happens.

    Is there anyone who has had bad experiences with any of my components, or is there anybody who can tell me whether it would be helpful to send my notebook to Lenovo? Thank you very much for your help on my issue!

    Regards, Paul

  • [CentOS 4.8] nslookup resolves domains to IPs, but I can't get a response to pings to external servers

    - by Beco
    I have a fresh install of CentOS 4.8 running on an internal development server. I haven't done anything to it besides setting up sudoers and SSH. I can SSH into the server, and from there resolve domains to IPs and ping internal servers, but for some reason I don't get any response when pinging external servers. The software firewall is disabled, and the problem is present with both static and DHCP-assigned network configurations. The network domain controller is a Windows Server 2003 box.

        $ nslookup google.com
        Server:   10.254.2.5
        Address:  10.254.2.5#53

        Non-authoritative answer:
        Name:    google.com
        Address: 74.125.47.147
        Name:    google.com
        Address: 74.125.47.99
        <etc...>

    10.254.2.5 is the Win2K3 server.

        $ ping google.com
        PING google.com (74.125.47.106) 56(84) bytes of data.

    It just hangs here indefinitely.

        $ cat /etc/resolv.conf
        ; generated by /sbin/dhclient-script
        search <...snip...>.local
        nameserver 10.254.2.5
        nameserver 10.254.2.124

    10.254.2.124 is the backup DC server, which is currently off and tombstoned by this point. The snipped section is our company name.

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr <snip>
                  inet addr:10.254.2.101  Bcast:10.254.2.255  Mask:255.255.255.0
                  inet6 addr: <snip>/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:80066 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4421 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7810133 (7.4 MiB)  TX bytes:590550 (576.7 KiB)
                  Interrupt:225 Base address:0xc000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:32 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:8104 (7.9 KiB)  TX bytes:8104 (7.9 KiB)

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.254.2.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
        0.0.0.0         10.254.2.5      0.0.0.0         UG    0      0        0 eth0

    And, for good measure, a snapshot of the current Ethernet config via the system-config-network GUI. Edit: I don't yet have enough rep to post images, so here's a link. Sorry! [system-config-network snapshot]

    I'm pretty green when it comes to setting up *nix dev servers and network configuration in general, so please let me know if I've left out critical information, or posted information I shouldn't have. Thanks!
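
    One way to narrow down where the replies die is to watch the wire while pinging: if echo requests leave but nothing comes back, the problem is upstream (gateway, firewall, or ICMP filtering); if nothing leaves at all, it's local. A minimal sketch (run as root; 74.125.47.106 is just the address from the ping output above):

        # in one terminal, watch ICMP on the interface
        tcpdump -ni eth0 icmp

        # in another, ping by address to take DNS out of the picture
        ping -c 4 74.125.47.106

        # and trace the path to see the last hop that answers
        traceroute -n 74.125.47.106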

  • No external network on Ubuntu 9.10, though DNS works..

    - by user29368
    Hi, I have a weird problem I can't solve. I have several computers, two with Xubuntu 9.10. One of them, acting as a media server, has stopped working when it comes to external network access. I can do, for example:

        ping google.com

    which gives me an IP address back, like:

        name@Media:/etc$ ping google.com
        PING google.com (66.102.9.147) 56(84) bytes of data.

    That tells me it reaches the DNS, but I get no response at all... If I ping a local computer, all works fine. I can also reach the computer via SSH without any problems.

    I have always used Network Manager, but now I uninstalled it and made the settings manually, like this in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.52
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            gateway 192.168.1.1

    Still no luck. I have no specific settings for this machine in my router, and all the other computers, including my Windows laptop, work fine. This is very annoying, since I can't even do an update or anything...

    ifconfig looks like this:

        eth0      Link encap:Ethernet  HWaddr 00:24:1d:9f:10:89
                  inet addr:192.168.1.52  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::224:1dff:fe9f:1089/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:15410 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2693 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1167398 (1.1 MB)  TX bytes:694973 (694.9 KB)
                  Interrupt:27 Base address:0xe000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:2150 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2150 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:143456 (143.4 KB)  TX bytes:143456 (143.4 KB)

    and route -n like this:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth0

    I do not know where the address starting with 169.254 comes from. Could that be a part of the problem? Hoping for some assistance, since I'm totally stuck here.. /george
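
    For what it's worth, the 169.254.0.0/16 entry is the IPv4 link-local (zeroconf/avahi) route that Ubuntu's ifup hooks add by default; it normally sits behind the default route and is harmless, but it can be removed to rule it out. A minimal sketch, using the bare IP from the ping output above so that DNS (which is clearly working here) is taken out of the picture:

        # drop the link-local route and re-test
        sudo route del -net 169.254.0.0 netmask 255.255.0.0 dev eth0
        ping -c 4 66.102.9.147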

  • Get Local IP-Address using Boost.Asio

    - by MOnsDaR
    Hey, I'm currently searching for a portable way of getting the local IP addresses. Because I'm using Boost anyway, I thought it would be a good idea to use Boost.Asio for this task. There are several examples on the net which should do the trick. Examples: the official Boost.Asio documentation, and some Asian page.

    I tried both codes with just slight modifications. The code from the Boost docs was changed to resolve not "www.boost.org" but "localhost" or my hostname instead. For getting the hostname I used boost::asio::ip::host_name() or typed it directly as a string. Additionally, I wrote my own code, which was a merge of the above examples and my (little) knowledge gathered from the Boost documentation and other examples.

    All the sources worked, but they just returned the following IP: 127.0.1.1 (that's not a typo, it's .1.1 at the end). I ran and compiled the code on Ubuntu 9.10 with GCC 4.4.1. A colleague tried the same code on his machine and got 127.0.0.2 (not a typo either...). He compiled and ran it on Suse 11.0 with GCC 4.4.1 (I'm not 100% sure).

    I don't know if it is possible to change the localhost address (127.0.0.1), but I know that neither I nor my colleague did. ifconfig says loopback uses 127.0.0.1. ifconfig also finds the public IP I am searching for (141.200.182.30 in my case; the subnet is 255.255.0.0).

    So is this a Linux issue, and is the code not as portable as I thought? Do I have to change something else, or is Boost.Asio not a workable solution for my problem at all? I know there are many questions about similar topics on Stack Overflow and other pages, but I cannot find information which is useful in my case. If you have useful links, it would be nice if you could point me to them.

    Thanks in advance, MOnsDaR

    PS: Here is the modified code I used from the Boost docs:

        #include <iostream>
        #include <boost/asio.hpp>

        using boost::asio::ip::tcp;

        int main()
        {
            boost::asio::io_service io_service;
            tcp::resolver resolver(io_service);
            tcp::resolver::query query(boost::asio::ip::host_name(), "");
            tcp::resolver::iterator iter = resolver.resolve(query);
            tcp::resolver::iterator end; // End marker.
            while (iter != end)
            {
                tcp::endpoint ep = *iter++;
                std::cout << ep << std::endl;
            }
            return 0;
        }
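
    A likely lead here: on Debian and Ubuntu systems the installer writes a line like "127.0.1.1 <hostname>" into /etc/hosts, so resolving your own hostname returns that loopback alias rather than the interface address - which matches the 127.0.1.1 result exactly. A quick check from the shell:

        # if the hostname maps to 127.0.1.1 here, the resolver is answering from /etc/hosts
        grep "$(hostname)" /etc/hosts
        getent hosts "$(hostname)"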

  • Debugging XSLT with extension objects in Visual Studio 2010

    - by Alex Ciminian
    I'm currently working on a project that involves a lot of XSLT transformations, and I really need a debugger (I have XSLTs that are 1000+ lines long, and I didn't write them :-). The project is written in C# and makes use of extension objects:

        xslArg.AddExtensionObject("urn:<obj>", new <Obj>());

    From my knowledge, in this situation Visual Studio is the only tool that can help me debug the transformations step by step. The static debugger is no use because of the extension objects (it throws an error when it reaches elements that reference their namespace). Fortunately, I've found this thread, which gave me a starting point (at least I know it can be done). After searching MSDN, I found the criteria that make stepping into the transform possible. They are listed here. In short:

    - the XML and the XSLT must be loaded via a class that has the IXmlLineInfo interface (XmlReader & co.)
    - the XML resolver used in the XslCompiledTransform constructor is file-based (XmlUriResolver should work)
    - the stylesheet should be on the local machine or on the intranet (?)

    From what I can tell, I meet all these criteria, but it still doesn't work. The relevant code samples are posted below:

        // [...]
        xslTransform = new XslCompiledTransform(true);
        xslTransform.Load(XmlReader.Create(new StringReader(contents)), null, new BaseUriXmlResolver(xslLocalPath));
        // [...]

        // I already had the xml loaded in an xmlDocument
        // so I have to convert to an XmlReader
        XmlTextReader r = new XmlTextReader(new StringReader(xmlDoc.OuterXml));
        XsltArgumentList xslArg = new XsltArgumentList();
        xslArg.AddExtensionObject("urn:[...]", new [...]());
        xslTransform.Transform(r, xslArg, context.Response.Output);

    I really don't get what I'm doing wrong. I've checked the interfaces on both XmlReader objects and they implement the required one. Also, BaseUriXmlResolver inherits from XmlUriResolver, and the stylesheet is stored locally.

    The screenshot below is what I get when stepping into the Transform function. First I can see the stylesheet code; after stepping through the parameters (on template-match), I get this. [screenshots omitted]

    If anyone has any idea why it doesn't work, or has an alternative way of getting it to work, I'd be much obliged :).

    Thanks, Alex

  • MKMapView memory usage grows out of control with setRegion: calls

    - by Kurt
    Hi, I have a single MKMapView instance that I have programmatically added to a UIView. As part of the UI, the user can cycle through a list of addresses, and the map view is updated to show the correct map for each address as the user goes through them. I create the map view once, and simply change what it displays with setRegion:animated:.

    The problem is that each time the map is changed to show a new address, the memory usage of my program increases by 200-500 KB (as reported by Memory Monitor in Instruments). According to Object Allocations, it appears that a lot of 1.0 KB mallocs are happening each time, and the Extended Detail pane for these 1.0 KB allocations shows that the responsible caller is convert_image_data, which is the result of [MKMapTileView drawLayer:inContext:].

    So, it seems likely to me that the memory usage is due to MKMapView not freeing the memory it uses to redraw the map each time. In fact, when I don't display the map at all (by not even adding it as a subview of my main UIView) but still cycle through the addresses (which changes various UILabels and other displayed info), the memory usage for the app does NOT increase. If I add the map view but never update it with setRegion:, the memory also does NOT increase when changing to a new address.

    One more bit of info: if I go to a new address (and therefore ask the map to display the new address), the memory jumps as described above. However, if I go back to an address that was already displayed, the memory does not jump when the map redraws at the old address.

    Also, this happens on iPad (real device) with 3.2 and on iPhone (again, real device) with 3.1.2.

    Here's how I initialize the MKMapView (I only do this once):

        CGRect mapFrame;
        mapFrame.origin.y = 460;  // yes, magic numbers. just for testing.
        mapFrame.origin.x = 0;
        mapFrame.size.height = 500;
        mapFrame.size.width = 768;
        mapView = [[MKMapView alloc] initWithFrame:mapFrame];
        mapView.delegate = self;
        [self.view insertSubview:mapView atIndex:0];

    And in response to the user selecting an address, I set the map like so:

        MKCoordinateRegion region;
        MKCoordinateSpan span;
        span.latitudeDelta = kStreetMapSpan;   // 0.003
        span.longitudeDelta = kStreetMapSpan;  // 0.003
        region.center = address.coords;  // coords is a CLLocationCoordinate2D
        region.span = span;
        [mapView setRegion:region animated:NO];

    Any thoughts? I've scoured the net but haven't seen mention of this problem, and I've reached the limits of my Instruments knowledge. Thanks for any ideas.

  • Multi-step Workflows: make Workflow A depend on results of Workflow B and/or Workflow C

    - by Joey
    I have been tasked with creating a Software Installation Approval section for our intranet. When a person requests that a particular piece of software be installed on their workstation, we need to get IT approval and then business approval. Once those are obtained, it is to be installed. I am using SharePoint Designer to do this.

    I have List A, where the user enters the information on the requested software. Workflow A then creates a task in List B, which is then assigned to the IT approver. Workflow B works on List B on item creation, setting the due dates, titles, and other fields, and then pauses until the due date. The IT approver works with the business side and completes the task. Once the List B task is complete, the item in List A should be marked as complete - I have everything up to this point working fine.

    I want to make this more robust in 2 ways. As the only real option is to mark the List B task as "completed", which essentially means "approved", we have no way of really denying a request. What I want to add is the option to approve or deny a request through the task on List B. If it is approved, I want the item in List A to continue to show "In Progress" with a custom status of "Approved", and I want to create a new task for the software installation; once the installation task is marked as completed, I want List A to show "Completed" with a status of "Installed". If it is denied, I want the item in List A to show as "Completed" with a status of "Denied".

    The problem is, I'm not even sure where to start making these modifications. Creating and modifying the custom status fields isn't that big of an issue - I have messed around with this, and I'm fairly confident I can do it easily. My main concern is that I know I will need a Workflow C, but I don't know where or how to trigger it to get the results I need. I've managed to get Workflows A and B working fine, but anything beyond this is really pushing the limit of my knowledge. It's probably obvious that I am rather new to SharePoint workflows. I was very much thrust into this position and I am still feeling my way around. Thanks in advance for any help!

  • Are PackageMaker installations with preinstall scripts broken on Snow Leopard?

    - by Stu Thompson
    Everything worked on 10.5, but now my PackageMaker installation project is broken. I've been fighting a problem for a few days now, and either Snow Leopard (OS X 10.6.1) has broken PackageMaker installations, or I am lacking a very, very basic tidbit of knowledge.

    To narrow down the problem, I've gotten to this point:

    - Create a new PackageMaker installation
    - Have it install a jpeg image into my home directory
    - Define a preinstall script that does nothing:

        #/bin/sh
        exit 0

    - Run the above... and watch it fail with the below error message, like clockwork:

        Sep 14 15:09:45 manoa installd[5620]: PackageKit: ----- Begin install -----
        Sep 14 15:09:45 manoa installd[5620]: PackageKit: request=PKInstallRequest <1 packages, destination=/>
        Sep 14 15:09:45 manoa installd[5620]: PackageKit: packages=(\n    "PKLeopardPackage <file://localhost/Users/stu/Desktop/asdf.pkg>"\n)
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: Extracting /Users/stu/Desktop/asdf.pkg (destination=/var/folders/Hb/HbXJFyEpFaupt5QyLN-pTk+++TI/-Tmp-/PKInstallSandbox-tmp/Root/~, uid=501)
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: Executing script "./preinstall" in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: *** launch path not accessible
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: Install Failed: PKG: pre-install scripts for "test.test.5year_header.pkg"\nError Domain=PKInstallErrorDomain Code=112 UserInfo=0x100149430 "An error occurred while running scripts from the package “asdf”." {\n    NSFilePath = "./preinstall";\n    NSLocalizedDescription = "An error occurred while running scripts from the package \U201casdf\U201d.";\n    NSURL = "file://localhost/Users/stu/Desktop/asdf.pkg";\n    PKInstallPackageIdentifier = "test.test.5year_header.pkg";\n}
        Sep 14 15:09:46 manoa Installer[5614]: install:didFailWithError:Error Domain=PKInstallErrorDomain Code=112 UserInfo=0x1195917c0 "An error occurred while running scripts from the package “asdf”."
        Sep 14 15:09:46 manoa Installer[5614]: Install failed: The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.
        Sep 14 15:09:47 manoa Installer[5614]: IFDInstallController 144040 state = 7
        Sep 14 15:09:47 manoa Installer[5614]: Displaying 'Install Failed' UI.
        Sep 14 15:09:47 manoa Installer[5614]: 'Install Failed' UI displayed message: 'The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.'

    There is no file in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB/, which makes me think the problem is with PackageMaker and not me. But I'm new to the world of OS X software installation, so doubts remain.

    So, the question: is PackageMaker with a preinstall script broken on OS X 10.6? Or is there some requirement regarding preinstall scripts that I do not understand?
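
    One detail worth a second look in the script quoted above: the interpreter line reads "#/bin/sh", which is a plain comment, not a shebang - the "!" is missing - and "launch path not accessible" is consistent with the kernel having no interpreter to launch. A minimal known-good preinstall for comparison (the script must also be executable):

        #!/bin/sh
        exit 0

    and, before building the package:

        chmod +x preinstall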

  • NHibernate: No persister error

    - by Mike
    Hello, in my quest to further my knowledge, I'm trying to get NHibernate running. I have the following structure to my solution:

    - Core Class Library Project
    - Infrastructure Class Library Project
    - MVC Application Project
    - Test Project

    In my Core project I have created the following entity:

        using System;

        namespace Core.Domain.Model
        {
            public class Category
            {
                public virtual Guid Id { get; set; }
                public virtual string Name { get; set; }
            }
        }

    In my Infrastructure project I have the following mapping:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           namespace="Core.Domain.Model"
                           assembly="Core">
          <class name="Category" table="Categories" dynamic-update="true">
            <cache usage="read-write"/>
            <id name="Id" column="Id" type="Guid">
              <generator class="guid"/>
            </id>
            <property name="Name" length="100"/>
          </class>
        </hibernate-mapping>

    With the following config file:

        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
            <property name="connection.connection_string">server=xxxx;database=xxxx;Integrated Security=true;</property>
            <property name="show_sql">true</property>
            <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
            <property name="cache.use_query_cache">false</property>
            <property name="adonet.batch_size">100</property>
            <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
            <mapping assembly="Infrastructure" />
          </session-factory>
        </hibernate-configuration>

    In my Test project, I have the following test:

        [TestMethod]
        [DeploymentItem("hibernate.cfg.xml")]
        public void CanCreateCategory()
        {
            IRepository<Category> repo = new CategoryRepository();
            Category category = new Category();
            category.Name = "ASP.NET";
            repo.Save(category);
        }

    I get the following error when I try to run the test:

        Test method Volunteer.Tests.CategoryTests.CanCreateCategory threw exception:
        NHibernate.MappingException: No persister for: Core.Domain.Model.Category.

    Any help would be greatly appreciated. I do have the cfg build action set to embedded resource. Thanks!

    Read the article

  • Objective-C wrapper API design methodology

    - by Wade Williams
    I know there's no one answer to this question, but I'd like to get people's thoughts on how they would approach the situation. I'm writing an Objective-C wrapper for a C library. My goals are:

    1) The wrapper should use Objective-C objects. For example, if the C API defines a parameter such as char *name, the Objective-C API should use name:(NSString *).
    2) The client using the Objective-C wrapper should not need any knowledge of the inner workings of the C library.

    Speed is not really an issue. That's all easy with simple parameters: it's no problem to take in an NSString, convert it to a C string, and pass it to the C library. My indecision comes in when complex structures are involved. Let's say you have:

        struct flow {
            long direction;
            long speed;
            long disruption;
            long start;
            long stop;
        } flow_t;

    And then your C API call is:

        void setFlows(flow_t inFlows[4]);

    So, some of the choices are:

    1) expose the flow_t structure to the client and have the Objective-C API take an array of those structures
    2) build an NSArray of four NSDictionaries containing the properties and pass that as a parameter
    3) create an NSArray of four "Flow" objects containing the structure's properties and pass that as a parameter

    My analysis of the approaches:

    Approach 1: easiest, but it doesn't meet the design goals.
    Approach 2: for some reason, this seems to me the most "Objective-C" way of doing it. However, each element of the NSDictionary would have to be wrapped in an NSNumber. Now it seems like we're doing an awful lot just to pass the equivalent of a struct.
    Approach 3: seems the cleanest from an object-oriented standpoint, and the extra encapsulation could come in handy later (a sketch of this approach follows below). However, like #2, it now seems like we're doing an awful lot (creating an array, creating and initializing objects) just to pass a struct.

    So, the question is: how would you approach this situation? Are there other choices I'm not considering? Are there additional advantages or disadvantages to the approaches I've presented that I'm not considering?
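
    To make approach 3 concrete, here is a minimal sketch of a Flow wrapper. The names are hypothetical, and it assumes flow_t is typedef'd as the setFlows signature implies:

        #import <Foundation/Foundation.h>
        #import "flow.h" // assumed C header declaring flow_t and setFlows()

        @interface Flow : NSObject
        @property (nonatomic, assign) long direction;
        @property (nonatomic, assign) long speed;
        @property (nonatomic, assign) long disruption;
        @property (nonatomic, assign) long start;
        @property (nonatomic, assign) long stop;
        - (flow_t)cStruct; // convert back to the C type at the library boundary
        @end

        @implementation Flow
        - (flow_t)cStruct {
            // Copy the object's properties back into the plain C struct.
            flow_t f;
            f.direction = self.direction;
            f.speed     = self.speed;
            f.disruption = self.disruption;
            f.start     = self.start;
            f.stop      = self.stop;
            return f;
        }
        @end

    The wrapper's setFlows: method would then walk the NSArray of Flow objects, fill a flow_t[4] buffer, and hand that to the C call, so the struct never leaks into client code.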

    Read the article

  • Flash -> ByteArray -> AMFPHP -> Invalid Image !??

    - by undefined
    Hi, I'm loading images into Flash and using JPGEncoder to encode each image to a ByteArray, which I send to AMFPHP to be written out to a file. This all appears to work correctly, and I can download the resulting file and open it in Photoshop CS4 absolutely fine. But when I try to open it from the desktop, or load it back into Flash, it doesn't work; Picasa, my default image browser, says "Invalid". Here is the code I use to write the ByteArray to a file:

        $jpg = $GLOBALS["HTTP_RAW_POST_DATA"];
        file_put_contents($filename, $jpg);

    That's it. I use the NetConnection class to connect and call the service. Do I need to say I'm sending jpg data? I assumed that JPGEncoder took care of that. How can I validate the ByteArray before writing the file? Do I need to set a MIME type or something? Excuse the slightly noob questions, a little knowledge can be a dangerous thing. Thanks.

    PART II: here is some code. First, load the image into the Flash player:

        item.load();

        function _onImageDataLoaded(evt:Event):void {
            var tmpFileRef:FileReference = FileReference(evt.target);
            image_loader = new Loader();
            image_loader.contentLoaderInfo.addEventListener(Event.COMPLETE, _onImageLoaded);
            image_loader.loadBytes(tmpFileRef.data);
        }

        function _onImageLoaded(evt:Event):void {
            bitmap = Bitmap(evt.target.content);
            bitmap.smoothing = true;
            if (bitmap.width > MAX_WIDTH || bitmap.height > MAX_HEIGHT) {
                resizeBitmap(bitmap);
            }
            uploadResizedImage(bitmap);
        }

        function resizeBitmap(target:Bitmap):void {
            if (target.height > target.width) {
                target.width = MAX_WIDTH;
                target.scaleY = target.scaleX;
            } else if (target.width >= target.height) {
                target.height = MAX_HEIGHT;
                target.scaleX = target.scaleY;
            }
        }

        function uploadResizedImage(target:Bitmap):void {
            var _bmd:BitmapData = new BitmapData(target.width, target.height);
            _bmd.draw(target, new Matrix(target.scaleX, 0, 0, target.scaleY));
            var encoded_jpg:JPGEncoder = new JPGEncoder(90);
            var jpg_binary:ByteArray = encoded_jpg.encode(_bmd);
            _uploadService = new NetConnection();
            _uploadService.objectEncoding = ObjectEncoding.AMF3;
            _uploadService.connect("http://.../amfphp/gateway.php");
            _uploadService.call("UploadService.receiveByteArray", new Responder(success, error), jpg_binary, currentImageFilename);
        }

    Many thanks for your help.
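
    On the PHP side, one way to reject bad data before writing is sketched below. The property access assumes an AMFPHP build that maps an AS3 ByteArray to an object with a data member, so check your gateway version. Note too that $GLOBALS["HTTP_RAW_POST_DATA"] holds the entire AMF envelope of the NetConnection call rather than the bare JPEG bytes, which would explain a file that a tolerant reader such as Photoshop opens but stricter viewers call invalid:

        <?php
        class UploadService {
            function receiveByteArray($byteArray, $filename) {
                // Unwrap the deserialized ByteArray parameter instead of
                // reading the raw (AMF-encoded) POST body.
                $bytes = is_object($byteArray) ? $byteArray->data : $byteArray;
                // A JPEG stream must start with the SOI marker 0xFFD8.
                if (substr($bytes, 0, 2) !== "\xFF\xD8") {
                    return "invalid jpeg";
                }
                // basename() keeps the client from writing outside this directory.
                file_put_contents(basename($filename), $bytes);
                return "ok";
            }
        }
        ?>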

    Read the article

  • Default Database Collations got messed up

    - by dominicdinada
    I am using Ubuntu 9.10 with XAMPP (LAMPP: MySQL 5.1.45, phpMyAdmin 3.3.1, PHP 5.3.2). My problem is that I set up my testing environment to debug my scripts locally, and in doing so a problem arose: I used Firefox's SQL Inject Me add-on to test for weaknesses, and doing so caused MySQL to change the default local collations:

        character sets dir    /opt/lampp/share/mysql/charsets/
        collation connection  latin1_general_ci (Global value) latin1_swedish_ci
        collation database    latin1_swedish_ci
        collation server      latin1_swedish_ci

    I have searched for quite some time for a solution, looking for the db.opt file which stores this information, without success. Not finding one, I removed LAMPP with "sudo rm -fR /opt" and reinstalled, and the problem still persists. I have tried to change the collations manually, and the database still displays latin1_swedish_ci as the default.

    Why is this a problem? Because the application I am testing and debugging locally is built on CodeIgniter with the Smarty framework, and since this combination detects the locale from the database server defaults, I keep getting errors saying there is no language file for Swedish. Of course I could add a Swedish language file to work around this, but I do not want to make that workaround permanent, because:

    A) when importing database files, backups, etc., it will default those databases to the Swedish locale;
    B) as time passes I might completely forget about this error and be back to square one.

    In my searches I found this code, which alters the tables of a schema to a desired collation (the snippet arrived truncated; the loop header is my reconstruction):

        <?php
        // Assumes $tables holds the schema's table names, e.g. from SHOW TABLES,
        // over an already-open mysql connection.
        foreach ($tables as $key => $value) {
            mysql_query("ALTER TABLE $value COLLATE latin1_general_ci");
        }
        echo "The collation of your database has been successfully changed!";
        ?>

    This is handy for switching collations one schema at a time, but it is not a fix when the framework does not care that one database is in one language: it tests the default of the entire server. Anyone with knowledge of a purge or fix for this, I would greatly appreciate the help. One final note: when I was testing, I only backed up the application's database, not the entire install. No matter whether I uninstall or reinstall, the database still seems to carry these problems.
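
    For reference, a sketch of pinning the server-wide defaults so new databases stop inheriting latin1_swedish_ci. The option names are standard for MySQL 5.1; the config file for XAMPP on Linux usually lives at /opt/lampp/etc/my.cnf, but verify the path for your install:

        [mysqld]
        # Defaults applied to every new database/connection unless overridden.
        character-set-server = latin1
        collation-server     = latin1_general_ci

    An existing database's default can also be changed directly:

        ALTER DATABASE my_database DEFAULT CHARACTER SET latin1 COLLATE latin1_general_ci;

    Restart MySQL after editing the config (e.g. /opt/lampp/lampp restart) so the new defaults take effect.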

    Read the article

< Previous Page | 443 444 445 446 447 448 449 450 451 452 453 454  | Next Page >