Search Results

Search found 18899 results on 756 pages for 'python c extension'.


  • What are incentives (if any) to use WinRT instead of .Net?

    - by Ark-kun
    Let's compare WinRT with .Net.

    .Net
    - .Net is the 13+ years evolution of COM. The three main parts of .Net are the execution environment, the standard libraries, and the supported languages.
    - The CLR is the native-code execution environment, based on COM.
    - The .Net Framework has a big set of standard libraries (implemented using managed and native code) that can be used from all .Net languages. There are .Net classes that allow using OS APIs.
    - WPF or Silverlight provide a XAML-based UI framework.
    - .Net can be used with C++, C#, JavaScript, Python, Ruby, VB, LISP, Scheme and many other languages. C++/.Net is a variation of the C++ language that allows interaction with .Net objects.
    - .Net supports inheritance, generics, operator and method overloading and many other features.
    - .Net allows creating apps that run on Windows (XP, 7, 8 Pro (Desktop and Metro), RT, CE, etc.), Mac OS, Linux (+ other *nix); iOS, Android, Windows Phone (7, 8); Internet Explorer, Chrome, Firefox; Xbox 360, PlayStation Suite; raw microprocessors.
    - There is support for creating games (2D/3D) using any managed language or C++.
    - Created by the Developer Division.

    WinRT
    - WinRT is based on COM. The three main parts of WinRT are the execution environment, the standard libraries, and the supported languages.
    - WinRT has a native-code execution environment based on COM.
    - WinRT has a set of standard libraries that more or less can be used from WinRT languages. There are WinRT classes that allow using OS APIs.
    - An unnamed Silverlight clone provides a XAML-based UI framework.
    - WinRT can be used with C++, C#, JavaScript, VB. C++/CX is a variation of the C++ language that allows interaction with WinRT objects.
    - Custom WinRT components don't support inheritance (classes must be sealed), generics, operator overloading and many other features.
    - WinRT allows creating apps that run on Windows 8 Pro and RT (Metro only); Windows Phone 8 (limited).
    - There is support for creating games (2D/3D) using C++ only.
    - Ordered by the Windows Team.

    I think that all the aspects except the last ones are very important for developers. On the other hand, it seems that the most important aspect for Microsoft is the last one. So, given the above comparison of conceptually identical technologies, what are the incentives (if any) to use WinRT instead of .Net?


  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up.

    My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is, and must remain, installed. I only need communication within the corporate network, which is not restricted.

    People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files.

    I have about 150 GB of free disk space and am thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines of the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work.

    I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost none of my programming experience is network programming. I'm a complete beginner in this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now sits solely on DVDs, purely for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK; it's already an improvement over getting the DVDs. As for policy, my manager is more or less on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.
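    Since I can install programs, I figure even the manual monthly copy could be a tiny Python script; a minimal sketch, assuming hypothetical paths and an ordinary UNC share on a co-worker's machine:

        import datetime
        import os
        import shutil

        SRC = r"D:\TeamShare"                    # hypothetical local share root
        DST_ROOT = r"\\COWORKER-PC\TeamBackups"  # hypothetical UNC path

        # One dated snapshot per month; copytree refuses to overwrite an
        # existing destination, which is the behavior we want here.
        stamp = datetime.date.today().strftime("%Y-%m")
        dst = os.path.join(DST_ROOT, "backup-" + stamp)
        shutil.copytree(SRC, dst)
        print("copied to", dst)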


  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, my question is about automating software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:

    - Checking that certain users exist, and that they are associated with one or many user groups. If not, then create them and their group associations.
    - Checking that target application folders exist and, if not, creating them with preconfigured path values defined when the package was assembled.
    - Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
    - Checking that a set of environment variables is defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files involved include /etc/services and /etc/system.

    Obviously, doing this for many boxes (the goal in question) by hand can be slow and error-prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another:

    1) Traditional shell scripts. I've only troubleshot these before, and I don't really have much experience with them. These would be my last resort.

    2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P. (See the sketch below for part of this.)

    3) Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permission checking/setting.

    It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
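    For the user/group and folder checks in option 2, most of it doesn't even need pexpect; a minimal standard-library sketch (Python 2.6+; the user, group and path names here are made up):

        import grp
        import os
        import pwd
        import subprocess

        USER, GROUP = "appuser", "appgrp"   # hypothetical names
        APP_DIR = "/opt/app"                # hypothetical preconfigured path

        # Group first, since useradd -g needs it to exist.
        try:
            grp.getgrnam(GROUP)
        except KeyError:
            subprocess.check_call(["groupadd", GROUP])

        try:
            pwd.getpwnam(USER)
        except KeyError:
            subprocess.check_call(["useradd", "-g", GROUP, USER])

        # Target folder: create if missing, then enforce owner and mode.
        if not os.path.isdir(APP_DIR):
            os.makedirs(APP_DIR)
        os.chown(APP_DIR, pwd.getpwnam(USER).pw_uid, grp.getgrnam(GROUP).gr_gid)
        os.chmod(APP_DIR, 0o750)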


  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago, the speakers on my Lenovo Thinkpad T410 (model number 2537A11) suddenly stopped working randomly. This error happens every time I watch a video or listen to a music file: the sound just abruptly stops. At the moment, I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems).

    Here is the output of a nice program someone pointed me to:

        martin@martin:~/Downloads$ sudo python run.py --monitor
        Using temporary directory: /dev/shm/hda-analyzer
        You may remove this directory when finished or if you like to download
        the most recent copy of hda-analyzer tool.
        Downloading file hda_analyzer.py
        Downloading file hda_guilib.py
        Downloading file hda_codec.py
        Downloading file hda_proc.py
        Downloading file hda_graph.py
        Downloading file hda_mixer.py
        Downloaded all files, executing hda_analyzer.py
        Watching 1 cards
        ======================================

    Sound is working normally, then it stops and the following lines appear:

        Diff for codec 0/0 (0x14f15069):
        ---
        +++
        @@ -164,17 +164,17 @@
             Power: setting=D0, actual=D0
           Node 0x1f [Pin Complex] wcaps 0x400501: Stereo
             Pincap 0x00000010: OUT
             Pin Default 0x901701f0: [Fixed] Speaker at Int N/A
               Conn = Analog, Color = Unknown
               DefAssociation = 0xf, Sequence = 0x0
               Misc = NO_PRESENCE
             Pin-ctls: 0x40: OUT
        -    Power: setting=D0, actual=D0
        +    Power: setting=D3, actual=D3
             Connection: 2
                0x10* 0x11
           Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital
             Pincap 0x00000010: OUT
             Pin Default 0x40f001f0: [N/A] Other at Ext N/A
               Conn = Unknown, Color = Unknown
               DefAssociation = 0xf, Sequence = 0x0
               Misc = NO_PRESENCE

    And now there is also an error in the dmesg output:

        hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj.

    I changed bdl_pos_adj to various numbers (-1, 0, 64, 1024) and either there is no change at all or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. My hardware information is provided by the alsa-info.sh website.

    Okay, I did some serious testing and even installed Windows, and now I officially conclude that this is a hardware-related issue with my laptop speakers. Reasons:

    - The error occurs in my installed Debian Linux, in an Ubuntu live distribution and in Windows XP.
    - No error message appears in any of the OSes. The sound just keeps "running" and I can't hear a thing.
    - I tested different setups, including OSS, ALSA, and the PulseAudio server on top.
    - If I use my new USB headphones, I can hear sound all the time without any sudden silences.

    So obviously, although hard to believe, my laptop speakers are not okay (I have never heard of similar cases). I'll award the bounty to anyone who can point me to good tutorials or the procedure for exchanging my T410 speakers (I still have warranty; the laptop was bought in Germany, but now I am in Denmark), or to someone who can explain the output from hda-analyzer (big log above).


  • Specifying a Postfix Instance to send outbound email

    - by Catherine Jefferson
    I have a CentOS 6.5 server running Postfix 2.6.x (the default distribution) with five public IPv4 IPs bound to it. Each IP has DNS and rDNS set separately, and each uses a different hostname at a different domain. I have five Postfix instances, one bound to each IP, like this example:

        192.168.34.104  red.example.com      /etc/postfix
        192.168.36.48   green.example.net    /etc/postfix-green
        192.168.36.49   pink.example.org     /etc/postfix-pink
        192.168.36.50   orange.example.info  /etc/postfix-orange
        192.168.36.51   blue.example.us      /etc/postfix-blue

    I've tested each IP by telnetting to port 25. Postfix answers and banners properly with the correct hostname. Email is received on all of these instances with no problems and is routed to the correct place. This setup, minus the final instance, has existed for a couple of years and works. I never bothered to set up outbound email to go through any but the main instance, however; there was no need.

    Now I need to send email from blue.example.us that actually leaves from that interface and IP, such that the Received headers show blue.example.us as the sending mailhost, so that SPF and DKIM validate, etc. The email that will be sent from blue.example.us is a feedback loop sent by a single shell account on the server (account5), an account that is dedicated to sending this email. The account receives the feedback loop emails from servers on other networks, saves the bodies of those emails, then generates a new outbound email header, appends the saved body, and sends the email. It sends by piping each email to sendmail -oi -t. We're doing it this way to mask the identities of the initial servers. The procmail script that processes these emails works correctly.

    However, I cannot configure this account to send email through the proper Postfix instance/IP/interface. The exact same account and script send email through the main Postfix instance /etc/postfix without any issues. When I change MAIL_CONFIG to point to /etc/postfix-blue in either .bash_profile or the procmail script that handles this email, though, I get this error:

        sendmail: fatal: User account5(###) is not allowed to submit mail

    I've read the manuals on Postfix.org, searched Google, and tried the suggestions in three previous answers here on ServerFault.com:

    - Postfix - specify interface to deliver outbound mail on
    - Postfix user is not allowed to submit mail
    - Postfix rejects php mails

    I have been careful to stop and restart Postfix after each configuration change, and tested the results. Nothing has worked. The main Postfix instance happily accepts outbound email from account5; the postfix-blue instance continues to reject email from account5 with the sendmail error above. As tempting as it is to blame machine hostility, I know that I must be missing something or doing something wrong. Does anybody have any suggestions as to what it might be? Please feel free to ask for further information about my setup if you need it.

    =-=-=-=-=-=-=-=-=-=

    At the request of the responder, here are main.cf and master.cf for a) the main Postfix instance ("red.example.com") and b) the FBL instance ("blue.example.us"). [NOTE: All parameters not specified below were left at the default Postfix 2.6 settings.]

    MAIN: master.cf

        smtp      inet  n       -       n       -       -       smtpd

    MAIN: main.cf

        myhostname = red.example.com
        mydomain = example.com
        inet_interfaces = $myhostname, localhost
        inet_protocols = all
        lmtp_host_lookup = native
        smtp_host_lookup = native
        ignore_mx_lookup_error = yes
        mydestination = $myhostname, localhost.$mydomain, localhost
        local_recipient_maps =
        mynetworks = 192.168.34.104/32
        relay_domains = example.com, example.info, example.net, example.org, example.us
        relayhost = [192.168.34.102]   # Separate physical server, main mailserver.
        relay_recipient_maps = hash:/etc/postfix/relay_recipients
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        smtpd_banner = $myhostname ESMTP $mail_name
        multi_instance_wrapper = ${command_directory}/postmulti -p --
        multi_instance_enable = yes
        multi_instance_directories = /etc/postfix-green /etc/postfix-pink /etc/postfix-orange /etc/postfix-blue

    FBL: master.cf

        184.173.119.103:25  inet  n       -       n       -       -       smtpd

    FBL: main.cf

        myhostname = blue.example.us
        mydomain = blue.example.us   <= Deliberately set to subdomain only.
        myorigin = $mydomain
        inet_interfaces = $myhostname
        lmtp_host_lookup = native
        smtp_host_lookup = native
        ignore_mx_lookup_error = yes
        mydestination = $myhostname
        local_recipient_maps = unix:passwd.byname $alias_maps $virtual_alias_maps
        mynetworks = 192.168.36.51/32, 192.168.35.20/31   <= Second IP is backup MX servers
        relay_domains = $mydestination
        recipient_canonical_maps = hash:/etc/postfix-blue/canonical
        virtual_alias_maps = hash:/etc/postfix-fbl/virtual
        alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
        alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
        mailbox_command = /usr/bin/procmail -a "$EXTENSION" DEFAULT=$HOME/Mail/ MAILDIR=$HOME/Mail
        smtpd_banner = $myhostname ESMTP $mail_name
        authorized_submit_users =
        multi_instance_name = postfix-blue
        multi_instance_enable = yes
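    The telnet checks above are easy to script; a minimal Python sketch using the example IPs from the table, just to confirm which instance answers where (Python 3):

        import smtplib

        # Print the EHLO response of whatever answers on each bound IP.
        for ip in ("192.168.34.104", "192.168.36.48", "192.168.36.49",
                   "192.168.36.50", "192.168.36.51"):
            s = smtplib.SMTP(ip, 25, timeout=10)
            code, msg = s.ehlo()
            print(ip, "->", code, msg.splitlines()[0].decode(errors="replace"))
            s.quit()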


  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough that I can get pretty much anything Microsoft sells, and I can get IBM WebSphere and the like for free as well.

    Earlier this year, I set up an oldish computer (2.6 GHz Pentium D, x64) to run Ubuntu Server headless. I'm predominantly a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc. made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp-up). Anyways, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents.

    The machine currently has the Ubuntu install, as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I'm not actually serving anything off the machine itself, but in Ubuntu I had it doing integration tests with Hudson on every commit, profiling my applications, etc. The machine would run headless, and I would remote into it.

    Here is what I am currently leaning towards / wondering about:

    - Windows 7 Ultimate vs. Windows Server 2008 (R2) (no one is really clear on why I should go with one over the other)
    - Windows Team Foundation
    - SharePoint (never used it before, kind of meh about it)
    - IBM WebSphere or GlassFish (some Java EE web server)
    - SQL Server 2008
    - A DVCS

    In order to better control product conflicts / limit resource use, I'm wondering if I should install things into virtual machines (I can get VMware or Microsoft virtualization products). I also plan on installing everything I had running under Linux (it's almost entirely Java-based development software, so it'll run on both; the only reason I went with Ubuntu during the year was that the Apache build seemed better).

    I'm primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process. (I.e., I'll still use Project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so.) Is there any other software / configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP. I'm familiar with Ruby and Python as well. Thanks!


  • Varnish server in front of nginx server with multiple virtualhosts

    - by Garreth 00
    I have tried to search for a solution for this, but can't find any documentation/tips on my specific setup.

    My setup:

    - Backend server: nginx with 2 different websites (2 top domains) in a virtualenv, running gunicorn/Python/Django. Hardware (VPS): 2 GB RAM, 8 CPUs.
    - Database server: PostgreSQL with pgbouncer. Hardware (VPS): 1 GB RAM, 8 CPUs.
    - Varnish server: only running Varnish. Hardware (VPS): 1 GB RAM, 8 CPUs.

    I'm trying to set up a Varnish server to handle a rare spike in traffic (20,000 unique req/s). The spike happens when a TV program mentions one of the sites. What do I need to do to make the Varnish server cache both sites/domains on my backend server?

    My /etc/varnish/default.vcl:

        backend django_backend {
            .host = "local.backendserver.com";
            .port = "8080";
        }

    My /usr/local/nginx/sites-available/domain1.com:

        upstream gunicorn_domain1 {
            server unix:/home/<USER>/.virtualenvs/<DOMAIN1>/<APP1>/run/gunicorn.sock fail_timeout=0;
        }

        server {
            listen 80;
            listen 8080;
            server_name domain1.com;
            rewrite ^ http://www.domains.com$request_uri? permanent;
        }

        server {
            listen 80 default_server;
            listen 8080;
            client_max_body_size 4G;
            server_name www.domain1.com;
            keepalive_timeout 5;

            # path for static files
            root /home/<USER>/<APP>-media/;

            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    proxy_pass http://gunicorn_domain1;
                    break;
                }
            }
        }

    My /usr/local/nginx/sites-available/domain2.com:

        upstream gunicorn_domain2 {
            server unix:/home/<USER>/.virtualenvs/<DOMAIN2>/<APP2>/run/gunicorn.sock fail_timeout=0;
        }

        server {
            listen 80;
            listen 8080;
            server_name domain2.com;
            rewrite ^ http://www.domains.com$request_uri? permanent;
        }

        server {
            listen 80;
            listen 8080;
            client_max_body_size 4G;
            server_name www.domain2.com;
            keepalive_timeout 5;

            # path for static files
            root /home/<USER>/<APP>-media/;

            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    proxy_pass http://gunicorn_domain2;
                    break;
                }
            }
        }

    Right now, if I try the IP of the Varnish server, I am only served domain1.com. Would everything be correct if I changed the DNS of the two domains to point to the Varnish server, or is extra setup needed before it would work? (A quick way to test Host-header routing is sketched below.)

    Question 2: Do I need a dedicated server for Varnish, or could I just install Varnish on my backend server, or would the server run out of memory quickly?
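    A quick hedged test of whether the backend vhosts route by Host header when requests arrive at the Varnish IP (Python 3; the IP below is a placeholder):

        import http.client

        VARNISH_IP = "203.0.113.10"   # placeholder for the Varnish server's IP

        # Ask the same IP for each site by Host header; if both vhosts answer
        # correctly, pointing DNS at Varnish should be enough.
        for host in ("www.domain1.com", "www.domain2.com"):
            conn = http.client.HTTPConnection(VARNISH_IP, 80, timeout=10)
            conn.request("GET", "/", headers={"Host": host})
            resp = conn.getresponse()
            print(host, "->", resp.status, resp.reason)
            conn.close()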


  • 404 when doing safe-upgrade on a Lucid 64-bit box?

    - by Millisami
    Why do I see a 404 when doing sudo aptitude safe-upgrade on my Lucid 64-bit box?

        deploy@li167-251:~$ sudo aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        The following packages will be upgraded:
          apache2 apache2-mpm-prefork apache2-threaded-dev apache2-utils apache2.2-bin
          apache2.2-common apt apt-utils base-files binutils bzip2 dpkg dpkg-dev gzip
          ifupdown krb5-multidev language-pack-en language-pack-en-base
          language-selector-common libatk1.0-0 libatk1.0-dev libavahi-client3
          libavahi-common-data libavahi-common3 libbz2-1.0 libc-bin libc-dev-bin libc6
          libc6-dev libc6-i686 libcups2 libfreetype6 libfreetype6-dev libglib2.0-0
          libglib2.0-dev libgssapi-krb5-2 libgssrpc4 libgtk2.0-0 libgtk2.0-common
          libgtk2.0-dev libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4
          libkrb5-3 libkrb5-dev libkrb5support0 libldap-2.4-2 libldap2-dev
          libmysqlclient-dev libmysqlclient16 libnotify-dev libnotify1 libpam-modules
          libpam-runtime libpam0g libparted0debian1 libpng12-0 libpng12-dev libpq-dev
          libpq5 libssl-dev libssl0.9.8 libtiff4 libudev0 libusb-0.1-4 linux-libc-dev
          mountall mysql-client mysql-client-5.1 mysql-client-core-5.1 mysql-common
          mysql-server mysql-server-5.1 mysql-server-core-5.1 openssh-client
          openssh-server openssl parted python-apt sudo tzdata udev upstart ureadahead
          wget xulrunner-1.9.2 xulrunner-1.9.2-dev
        The following packages are RECOMMENDED but will NOT be installed:
          colibri debhelper fakeroot hicolor-icon-theme libatk1.0-data libglib2.0-data
          libgtk2.0-bin libhtml-template-perl manpages-dev notification-daemon
          notify-osd ssl-cert xauth xfce4-notifyd
        88 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 85.8MB of archives. After unpacking 1712kB will be used.
        Do you want to continue? [Y/n/?] y
        Writing extended state information... Done
        Get:1 http://security.ubuntu.com/ubuntu/ lucid-updates/main libpam-modules 1.1.1-2ubuntu5 [358kB]
        Get:2 http://security.ubuntu.com/ubuntu/ lucid-updates/main base-files 5.0.0ubuntu20.10.04.2 [70.2kB]
        Get:3 http://security.ubuntu.com/ubuntu/ lucid-updates/main gzip 1.3.12-9ubuntu1.1 [102kB]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc-bin 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6-i686 2.11.1-0ubuntu7.2
        .........


  • Synchronizing/deploying scripts across several systems

    - by otto
    I have a few time-consuming tasks that I like to spread across several computers. These tasks require running an identical Ruby or Python script (or series of scripts that call each other) on each machine. The machines will each have a separate config file telling the script what portion of the task to complete. I want to figure out the best way to synchronize the scripts on these machines prior to running them.

    Up until now, I have been making changes to a copy of the script on a network share and then copying a fresh copy to each machine when I want to run it. But this is cumbersome and leaves a chance for error (e.g. missing a file on the copy or not clicking "copy and replace").

    Let's assume the systems are standard Windows machines that are not dedicated to this task, and that I don't need to run these scripts all the time (so I don't want a solution that runs 24/7 and always keeps them up to date; I'd prefer something that pushes/pulls on command). My thoughts on the various options:

    1. Simple adaptation of my current workflow: keep the originals on the network drive, but write a batch file that copies over the latest version of the scripts so everything is a one-click operation. Requires action on each system, but that's not the end of the world (since each one usually needs its configuration file changed slightly too). (A Python take on this is sketched below.)

    2. Put everything in a Mercurial/Git repository and pull a fresh copy onto each node. Going straight to the repo from each machine would guarantee a current version (and would have the fringe benefit of allowing edits to the script to be made from any machine). The cons would be that it requires a VCS to be installed on each machine, and there might be some pain dealing with authentication since I wouldn't use a public repo.

    3. Open up write access on a shared folder and write a script to use rsync (or similar) to push the changes out to all of the machines at once. This gets a current version onto every machine (though you would have to change the script if you want to omit a machine or add a new one). A possible issue is that each computer has to allow write access.

    4. Dropbox is a reasonable suggestion (and could work well), but I don't want to use an external service, and I'd prefer not to have Dropbox running 24/7 on systems that would normally not need it.

    Is there something simple that I am missing? Some tool designed expressly for doing this kind of thing? Otherwise I am leaning toward just tying all of the systems into Mercurial since, while it requires extra software, it is a little more robust than writing a batch file (e.g. if I split part of a script into a separate module, Mercurial will know what to do, whereas I would have to add a line to the batch file).
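    A minimal sketch of option 1 in Python rather than a batch file, which keeps the sync step identical on every machine (paths are hypothetical placeholders):

        import os
        import shutil

        MASTER = r"\\fileserver\scripts"   # hypothetical network share
        LOCAL = r"C:\tasks\scripts"        # hypothetical local working copy

        # Drop the stale copy entirely, then take a fresh one, so a run can
        # never mix old and new files or silently miss one.
        if os.path.isdir(LOCAL):
            shutil.rmtree(LOCAL)
        shutil.copytree(MASTER, LOCAL)
        print("synced", MASTER, "->", LOCAL)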


  • VPS with 512 MB RAM running WordPressMU consumes lots of memory

    - by CAPitalZ
    I have googled for days, gathered all the optimization suggestions and tried them. My sites are not getting any high hits -- maybe 100 hits per day [all my sites combined].

    Here are my specs:

    - 512 MB RAM VPS, burstable to 1024 MB
    - CentOS 5 32-bit & cPanel/WHM
    - Apache 2.2
    - MySQL 5.0
    - PHP 5.3.2

    Here are my configs. I have 2 WordPressMU production sites and 1 test site.

    my.cnf:

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /var/lib/mysql/mysql.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port = 3306
        socket = /var/lib/mysql/mysql.sock
        skip-locking
        skip-bdb
        skip-innodb
        key_buffer = 16M
        max_allowed_packet = 1M
        table_cache = 64
        sort_buffer_size = 512K
        net_buffer_length = 8K
        read_buffer_size = 256K
        read_rnd_buffer_size = 512K
        myisam_sort_buffer_size = 8M
        #CAPitalZ
        thread_cache_size=8
        thread_concurrency=4
        #query_cache_type=1
        #query_cache_limit=1M
        query_cache_size=16M
        concurrent_insert=2
        low_priority_updates=1
        max_connections=50
        tmp_table_size=16M
        max_heap_table_size=16M
        join_buffer_size=1M
        interactive_timeout=25
        wait_timeout=1000
        #connect_timout=10   <- not able to restart mysql with this
        max_connect_errors=10

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (via the "enable-named-pipe" option) will render mysqld useless!
        # skip-networking

        # Disable Federated by default
        skip-federated

        # Replication Master Server (default)
        # binary logging is required for replication
        log-bin=mysql-bin

        # required unique id between 1 and 2^32 - 1
        # defaults to 1 if master-host is not set
        # but will not function as a master if omitted
        server-id = 1

        [mysqld_safe]
        open_files_limit=8192

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 20M
        sort_buffer_size = 20M
        read_buffer = 2M
        write_buffer = 2M

        [myisamchk]
        key_buffer = 20M
        sort_buffer_size = 20M
        read_buffer = 2M
        write_buffer = 2M

        [mysqlhotcopy]
        interactive-timeout

    httpd.conf: I have unselected many modules and recompiled using EasyApache in WHM. I only have the following modules built:

    - Deflate
    - Expires
    - Fileprotect
    - Imagemap
    - MPM Prefork
    - Version [default]
    - EAccelerator for PHP
    - Bcmath
    - Calendar
    - CurlSSL [I'm using Curl, but I don't have any https sites]
    - Expat
    - GD [for image cropping]
    - Gettext
    - Imap
    - Mbregex [default]
    - Mbstring [need both Mbregex and Mbstring for utf-8]
    - Mysql of the system
    - MySQL "Improved" extension
    - Sockets
    - TTF (FreeType) [I'm using a custom font]
    - Zlib

    Under Global Configuration I only have FollowSymLinks enabled; TraceEnable, ServerSignature and FileETag are Off; ServerTokens is ProductOnly; the DirectoryIndex priority has index.php first. I have removed Clamd [Clam Anti-virus], and SpamAssassin is off. Under Tweak Settings, "Default catch-all/default address behavior for new accounts" is set to "fail", and all stats programs are turned off. I have eAccelerator installed, checked it in phpinfo, and it's working.

    [Pre VirtualHost Include under WHM]:

        Timeout 20
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 3
        MinSpareServers 1
        MaxSpareServers 3
        StartServers 1
        ServerLimit 50
        MaxClients 50
        MaxRequestsPerChild 4000
        ExtendedStatus Off
        #ServerType standalone   <- this throws an error
        HostnameLookups Off
        <Directory "/">
            AllowOverride None
        </Directory>

    My sites take ages to load and WHM/cPanel will not even load: adadaa.com, http://adadaa.net/, kadais.ca.

    My average memory consumption is around 1000 MB [yes, always bursting]. The process that consumes the most CPU and also the most memory is mysql, but I also get around 15 httpd processes [when it's bursting]. I already got a warning from cpuwatchcheck saying: "While processing, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 07:00:37 up 11:30, 0 users, load average: 14.64, 16.79, 20.07"

    I don't know; I have tried switching these config values many different times, but nothing seems to work. Please show some light... Thanks. (For a rough memory budget from these values, see the sketch below.)
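    For reference, a rough worst-case memory budget for the my.cnf above, using the usual global-buffers-plus-per-connection-buffers rule of thumb (a sketch, not an exact measurement):

        # Worst-case MySQL memory for the my.cnf above, in MB. Rule of thumb:
        # global buffers + per-connection buffers * max_connections.
        key_buffer = 16
        query_cache_size = 16
        tmp_table_size = 16

        # sort (0.5) + read (0.25) + read_rnd (0.5) + join (1.0) buffers, per connection
        per_connection = 0.5 + 0.25 + 0.5 + 1.0
        max_connections = 50

        total = key_buffer + query_cache_size + tmp_table_size \
                + per_connection * max_connections
        print("worst case: about %.0f MB" % total)   # ~160 MB, well inside 512 MB

    If the arithmetic says ~160 MB but mysql is observed far above that, the pressure is probably coming from somewhere other than these buffer settings.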


  • Secure copy uucp style

    - by Alexander Janssen
    I often have the case that I have to make a lot of hops to the remote host, just because there is no direct routing between my client and the remote host. When I need to copy files from a remote host two or more hops away, I always have to:

        client$ ssh host1
        host1$ ssh host2
        host2$ scp host3:/myfile .
        host2$ exit
        host1$ scp host2:myfile .
        host1$ exit
        client$ scp host1:myfile .

    Back when uucp was still being used, this would be as simple as:

        uucp host1!host2!host3 /myfile .

    I know that there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with customers' machines. Does anyone know of a method for doing this task without the need to set up a lot of tunnels or deploy new software to remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts, doing the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding.

    Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution to copy files bangpath-style using scp via multiple hops, without the need to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and these hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows of something which already does it. Minimum-invasive changes to hosts on the bangpath, simple invocation from the client.

    Edit 2: To give you an impression of how it's properly been done in interactive sessions, have a look at the GXPC cluster shell. This is basically a Python script which spawns itself over to all remote hosts that have connectivity and where your ssh key is installed. The great thing about it is that you can tell it "I can reach HostC via HostB via HostA," and it just works. I want to have this for scp.
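    For what it's worth, if the client's OpenSSH is 7.3 or newer, the whole bangpath collapses into one command with nothing installed on the hops; a minimal sketch (host names as in the example above):

        import subprocess

        # Copy host3:/myfile in one shot via host1 and host2. Client-side
        # only: needs OpenSSH 7.3+ for ProxyJump, and -o avoids touching
        # any permanent ssh config.
        subprocess.check_call([
            "scp", "-o", "ProxyJump=host1,host2",
            "host3:/myfile", ".",
        ])

    The equivalent plain invocation is scp -o ProxyJump=host1,host2 host3:/myfile . -- the Python wrapper only matters if you want to script several such copies.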


  • Vim digraphs are not working

    - by Hauleth
    I've tried to input digraphs in Vim (without vi compatibility) and unfortunately I can't. After typing Ctrl-K P *, it should insert the capital Greek letter pi. I've also tried P<BS>* with :set digraph, but no success either.

    My gvim --version output:

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 23 2012 18:42:18)
        Included patches: 1-712
        Compiled by ArchLinux
        Big version with GTK2 GUI. Features included (+) or not (-):
        +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent
        +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff
        +digraphs +dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi
        +file_in_path +find_in_path +float +folding -footer +fork() +gettext
        -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall
        +linebreak +lispindent +listcmds +localmap +lua +menu +mksession
        +modify_fname +mouse +mouseshape +mouse_dec +mouse_gpm -mouse_jsbterm
        +mouse_netterm +mouse_sgr -mouse_sysmouse +mouse_urxvt +mouse_xterm
        +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra +perl
        +persistent_undo +postscript +printer -profile +python -python3 +quickfix
        +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff
        +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static
        -tag_any_white -tcl +terminfo +termresponse +textobjects +title +toolbar
        +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
        +vreplace +wildignore +wildmenu +windows +writebackup +X11 -xfontset +xim
        +xsmp_interact +xterm_clipboard -xterm_save
        system vimrc file: "/etc/vimrc"
        user vimrc file: "$HOME/.vimrc"
        user exrc file: "$HOME/.exrc"
        system gvimrc file: "/etc/gvimrc"
        user gvimrc file: "$HOME/.gvimrc"
        system menu file: "$VIMRUNTIME/menu.vim"
        fall-back for $VIM: "/usr/share/vim"
        Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -pthread -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng15 -I/usr/local/include -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
        Linking: gcc -L. -Wl,-O1,--sort-common,--as-needed,-z,relro -rdynamic -Wl,-export-dynamic -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -L/usr/local/lib -Wl,--as-needed -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lpangocairo-1.0 -lgdk_pixbuf-2.0 -lcairo -lpango-1.0 -lfreetype -lfontconfig -lgobject-2.0 -lglib-2.0 -lSM -lICE -lXt -lX11 -lXdmcp -lSM -lICE -lm -lncurses -lnsl -lacl -lattr -lgpm -ldl -L/usr/lib -llua -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -fstack-protector -L/usr/local/lib -L/usr/lib/perl5/core_perl/CORE -lperl -lnsl -ldl -lm -lcrypt -lutil -lpthread -lc -L/usr/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -lruby -lpthread -lrt -ldl -lcrypt -lm -L/usr/lib

    Can you tell me how to get my Ctrl-K digraphs to work?

    EDIT: The output of :verbose set enc? fenc? is:

        encoding=utf-8
              Last set from ~/.vimrc
        fileencoding=


  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (it requires PHP; it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right: I get a 400 Bad Request error whenever I try to browse to my website. I've mostly been following one guide, and I've done more or less everything it recommends, except that I did not set up PHP-FPM to use a Unix socket, and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP, and MySQL.

    Anyways, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name, which contains my real name):

    /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 15;
            types_hash_max_size 2048;
            # server_tokens off;

            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/subdomain.mydomain.net:

        server {
            listen 80;        # listen for IPv4
            listen [::]:80;   # listen for IPv6

            server_name www.subdomain.mydomain.net subdomain.mydomain.net;

            access_log /srv/www/subdomain.mydomain.net/logs/access.log;
            error_log /srv/www/subdomain.mydomain.net/logs/error.log;

            location / {
                root /srv/www/subdomain.mydomain.net/public;
                index index.php;
            }

            location ~ \.php$ {
                try_files $uri =400;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name;
            }
        }

    All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post, as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. The only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users.

    What can I do to resolve my issue? Please let me know if I need to supply any additional details.
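    For debugging a 400 like this, it can help to replay a minimal request and see whether the response depends on the Host header or on the PHP location block; a small Python 3 sketch (hostname as in the config above):

        import http.client

        # Replay a bare GET with a matching and a non-matching Host header
        # and compare what comes back.
        for host in ("subdomain.mydomain.net", "something.else.invalid"):
            conn = http.client.HTTPConnection("subdomain.mydomain.net", 80, timeout=10)
            conn.request("GET", "/index.php", headers={"Host": host})
            resp = conn.getresponse()
            print(host, "->", resp.status, resp.reason)
            conn.close()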


  • Removing file with strange characters in filename in OS X

    - by SiggyF
    After a memory error in my program, I am stuck with a file with a strange filename. It's proving quite resistant to all the normal methods for removing files with strange names. The filename is:

        %8BUȅ҉%95d%F8%FF%FF\x0f%8E%8F%FD%FF%FF%8B%B5T%F8%FF%FF%8B%85\%F8%FF%FF\x03%85x%F8%FF%FF%8B%95D%F8%FF%FF%8B%BD%9C%F8%FF%FF%8D\x04%86%8B%B5@%F8%FF%FF%89%85%90%F8%FF%FF%8B%85X%F8%FF%FF\x03%85%9C%F8%FF%FF%C1%E7\x02%8B%8Dx

    I tried the following:

    - rm * -- "No such file or directory"
    - rm -- filename -- "No such file or directory"
    - rm "filename" -- "No such file or directory"
    - ls -i to get the inode number -- "No such file or directory"
    - stat filename -- "No such file or directory"
    - zipping the directory the file is in -- error occurred while adding "" to the archive
    - deleting the directory in Finder -- error -43
    - in Python: os.unlink(os.listdir(u'.')[0]) -- OSError: No such file or directory
    - find . -type f -exec rm {} \; -- "No such file or directory"
    - checked for locks on the file with lsof -- no locks

    All these attempts result in a "file (long filename here) not found" error, or error -43 -- even the ls -i. I couldn't find any more options, so before reformatting or repairing my filesystem (fsck might help) I thought maybe there is something I missed.

    I wrote this small C program to get the inode:

        #include <stdio.h>
        #include <stddef.h>
        #include <sys/types.h>
        #include <dirent.h>   /* needed for DIR, opendir, readdir */

        int main(void)
        {
            DIR *dp;
            struct dirent *ep;

            dp = opendir("./");
            if (dp != NULL) {
                while ((ep = readdir(dp))) {
                    printf("d_ino=%lu, ", (unsigned long) ep->d_ino);
                    printf("d_name=%s.\n", ep->d_name);
                }
                (void) closedir(dp);
            } else {
                perror("Couldn't open the directory");
            }
            return 0;
        }

    That works. I now have the inode, but the normal find . -inum inode -exec rm '{}' \; doesn't work. I think I have to use clri now.
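    One more idea before reaching for clri: since Python is already in play, byte-string paths sidestep the Unicode layer entirely. A sketch in Python 3 syntax (this may still fail if the kernel itself rejects the name, but it rules out encoding problems in the tools):

        import os

        # Byte-string paths never go through Unicode decoding, so a damaged
        # name can be matched exactly as the filesystem stores it.
        for raw in os.listdir(b'.'):
            try:
                raw.decode('utf-8')
            except UnicodeDecodeError:
                print('unlinking', raw)
                os.unlink(raw)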


  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hit-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn for the gateway. My 2 GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using ApacheBench against a single static asset). To compare, if I swap out the static file link for my tracking link, I still get about 600 requests per second; I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage. This is expected, because I've turned off all nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage. Presumably due to Redis's memory management, my memory usage gradually climbs up and then drops back down, but it has never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections? (See the sketch below.)
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
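    Regarding question (c), a quick way to dump the usual suspects (per-process fd limit, listen backlog, ephemeral port range) from Python on the Linux box:

        import resource

        # Per-process open-file limit; each connection costs at least one fd.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print("RLIMIT_NOFILE:", soft, hard)

        # Kernel listen backlog and ephemeral port range (Linux proc paths).
        for path in ("/proc/sys/net/core/somaxconn",
                     "/proc/sys/net/ipv4/ip_local_port_range"):
            with open(path) as f:
                print(path, "=", f.read().strip())

    Long-running degradation like (d) is sometimes connections piling up in TIME_WAIT and exhausting the port range, which a probe like this helps confirm or rule out.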


  • <msbuild/> task fails while <devenv/> succeeds for MFC application in CruiseControl.NET?

    - by ee
    The Overview

    I am working on a continuous-integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (<devenv/>) works, but a simple MSBuild wrapper script (see below) run via the CCNet <msbuild/> task fails with errors like:

        error RC1015: cannot open include file 'winres.h'..
        error C1083: Cannot open include file: 'afxwin.h': No such file or directory
        error C1083: Cannot open include file: 'afx.h': No such file or directory

    The Question

    How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild+VS2010+MFC+CCNet?)

    Background Details

    We have successfully upgraded an MFC application (.exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment.

    - I did a full installation of VS2010 (Professional) on the build server. In this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines.
    - VS2010 is correctly installed on the CI server, and the devenv task works as expected.
    - I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's <msbuild/> task and fails with the errors mentioned above.

    The Simple MSBuild Wrapper

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="Build">
            <!-- Doing some versioning stuff here -->
            <MSBuild Projects="target.sln"
                     Properties="Configuration=ReleaseUnicode;Platform=Any CPU;..." />
          </Target>
        </Project>

    My Assumptions

    - This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion.
    - I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work.
    - Given the VS2010 MSBuild support for Visual C++ projects (.vcxproj), shouldn't MSBuilding a solution be pretty close to compiling via Visual Studio?

    But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.

    Update 1

    This appears to only ever happen in two cases, resource compilation (rc.exe) and precompiled header (stdafx.h) compilation, and only for certain projects. I was thinking it was across the board, but indeed it appears only to be in these cases. I guess I will keep digging and hope someone has some insight they are willing to share...


  • Html.EditorFor not updating model on post

    - by Dave
    I have a complex type composed of two nullable DateTimes:

        public class Period
        {
            public DateTime? Start { get; set; }
            public DateTime? End { get; set; }

            public static implicit operator string(Period period) { /* converts from Period to string */ }
            public static implicit operator Period(string value) { /* and back again */ }
        }

    I want to display them together in a single textbox as a date range so I can provide a nice jQuery UI date-range selector. To make that happen, I have the following custom editor template:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Period>" %>
        <% string name = ViewData.TemplateInfo.HtmlFieldPrefix; %>
        <%= Html.PeriodTextBox(name, Model.EarliestDate, Model.LatestDate) %>

    where Html.PeriodTextBox is an extension method I've written that just concatenates the two dates sensibly, turns off autocomplete, and generates a textbox, like so:

        public static MvcHtmlString PeriodTextBox(this HtmlHelper helper, string name, DateTime? startDate, DateTime? endDate)
        {
            TagBuilder builder = new TagBuilder("input");
            builder.GenerateId(name);
            builder.Attributes.Add("name", name);
            builder.Attributes.Add("type", "text");
            builder.Attributes.Add("autocomplete", "off");
            builder.Attributes.Add("value", ConcatDates(startDate, endDate));
            return MvcHtmlString.Create(builder.ToString());
        }

    That's working fine, in that I can call <%= Html.EditorFor(m => m.ReportPeriod) %> and I get my textbox; then, when the form is submitted, the FormCollection passed to the post action contains an entry named ReportPeriod with the correct value:

        [HttpPost]
        public ActionResult ReportByRange(FormCollection formValues)
        {
            Period reportPeriod = formValues["ReportPeriod"];
            // creates a Period, with the expected values
        }

    The problem is that if I replace the FormCollection with the model type I'm passing to the view, the ReportPeriod property never gets set:

        [HttpPost]
        public ActionResult ReportByRange(ReportViewModel viewModel)
        {
            Period reportPeriod = viewModel.ReportPeriod;
            // this is null
        }

    I expected MVC would try to set the string from the textbox on that property, and that it would automatically generate a Period (as in my FormCollection example), but it's not. How do I tell the textbox I've generated in the custom editor to populate that property on the model?


  • ActiveDirectoryMembershipProvider and ADAM (or AD LDS) and SetPassword

    - by Iulian
    From the subject line it seems to be a rather broad subject, and I need some help here. Basically what I want is to use ActiveDirectoryMembershipProvider with an ADAM instance to authenticate users in an ASP.NET web application. My development environment is a Windows 7 machine with an AD LDS instance on it, whilst the QA server is a Windows 2003 server with an ADAM instance on it. I have all the required users on both instances, plus one with the Administrator role (CN=Admin,CN=xxx,DC=xxx,C=xx) which I want to use as the connection user.

    Using connectionProtection="None" connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx" connectionPassword="xxx", I am able to authenticate in both environments (dev & QA). If I change connectionProtection to "Secure", I am no longer able to authenticate; the error I get is "Parser Error Message: Unable to establish secure connection with the server".

    To me it feels wrong to use connectionProtection="None", although I have found a lot of samples on the net using this setting. Can I use connectionProtection="Secure" to connect to an ADAM instance using an account defined on that instance that has the Administrator role? What other choices do I have (like using a domain account)? What if the machine where I am to deploy the application is not part of the domain -- will this affect the behavior in any way? I am a novice in this respect, so I would really appreciate some clear answers or some directions on where to look.

    Now, besides the "signing in" feature of ActiveDirectoryMembershipProvider, I also want to add an extra one, which is setting the password without knowing the old one (something that will be used by a "reset password" feature). So I added a couple of extension methods to the provider, and used System.DirectoryServices classes like DirectoryEntry and the like. When creating a directory entry, I use the same credentials provided in web.config for the provider, minus the AuthenticationType, as I don't know the right combination of flags that corresponds to None/Secure. I am able to use Invoke "SetPassword" with the ADS_OPTION_PASSWORD_METHOD option set to ADS_PASSWORD_ENCODE_CLEAR on my dev machine (w/ AD LDS instance); nevertheless, in the QA environment (w/ ADAM instance) I get an error like "Exception Details: System.DirectoryServices.DirectoryServicesCOMException: An operations error occurred. (Exception from HRESULT: 0x80072020)".

    I am quite sure it is not about AD LDS vs. ADAM, but probably another configuration/permission issue. So can anyone help me with some hints on how to use this SetPassword feature? And as a general question, what are the best practices when it comes to using ADAM regarding security, programming, etc.?

    Thanks in advance,
    Iulian


  • Concatenate & Minify JS on the fly OR at build time - ASP.NET MVC

    - by Charlino
    As an extension to this question here, Linking JavaScript Libraries in User Controls, I was after some examples of how people are concatenating & minifying JavaScript on the fly OR at build time. I would also like to see how it then works into your master pages. I don't mind page-specific files being minified and linked individually, as they currently are (see below), but all the JS files on the main master page (I have about 5 or 6) I would like concatenated and minified. Bonus points for anyone who also incorporates CSS concatenation & minification! :-)

    Current master page with the common JS files that I would like concatenated & minified:

        <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
        <head runat="server">
            ... BLAH ...
            <asp:ContentPlaceHolder ID="AdditionalHead" runat="server" />
            ... BLAH ...
            <%= Html.CSSBlock("/styles/site.css") %>
            <%= Html.CSSBlock("/styles/jquery-ui-1.7.1.css") %>
            <%= Html.CSSBlock("/styles/jquery.lightbox-0.5.css") %>
            <%= Html.CSSBlock("/styles/ie6.css", 6) %>
            <%= Html.CSSBlock("/styles/ie7.css", 7) %>
            <asp:ContentPlaceHolder ID="AdditionalCSS" runat="server" />
        </head>
        <body>
            ... BLAH ...
            <%= Html.JSBlock("/scripts/jquery-1.3.2.js", "/scripts/jquery-1.3.2.min.js") %>
            <%= Html.JSBlock("/scripts/jquery-ui-1.7.1.js", "/scripts/jquery-ui-1.7.1.min.js") %>
            <%= Html.JSBlock("/scripts/jquery.validate.js", "/scripts/jquery.validate.min.js") %>
            <%= Html.JSBlock("/scripts/jquery.lightbox-0.5.js", "/scripts/jquery.lightbox-0.5.min.js") %>
            <%= Html.JSBlock("/scripts/global.js", "/scripts/global.min.js") %>
            <asp:ContentPlaceHolder ID="AdditionalJS" runat="server" />
        </body>

    Used in a page like this (which I'm happy with):

        <asp:Content ID="signUpContent" ContentPlaceHolderID="AdditionalJS" runat="server">
            <%= Html.JSBlock("/scripts/pages/account.signup.js", "/scripts/pages/account.signup.min.js") %>
        </asp:Content>

    EDIT: What I'm using now. Since asking this question, Microsoft have released their own JS & CSS compression library called Microsoft AJAX Minifier. I'd definitely recommend checking it out; it includes MSBuild tasks, which are the duck's nuts.


  • Jboss Seam: Enabling Debug page on WebLogic 10.3.2 (11g)

    - by Markos Fragkakis
    Hi all, SKIP TO UPDATE 3 I want to enable the Seam debug page on Weblogic 10.3.2 (11g). So, I have done the following: I have the jboss-seam and jboss-seam-debug jars as dependency in both my ejb and web maven projects (both are modules of my superproject) I put this context parameter in my web.xml: <context-param> <param-name>org.jboss.seam.core.init.debug</param-name> <param-value>true</param-value> </context-param> Now, when I hit the URL of my application, I get the debug page with this exception (full stacktrace at the end of the post): Caused by java.lang.IllegalStateException with message: "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)" From posts I read it seems that this is somehow related to two jars of jboss-seam or jboss-seam-debug being in the classpath. I opened my ear file and only one of each is present (in the ear) whereas the war itself has no libraries in the WEB-INF/lib. I have also read of another way to initialize debug page using the components.xml. I also tried to include the following components.xml in the WEB-INF, but it didn't work either: <?xml version="1.0" encoding="UTF-8"?> <components xmlns="http://jboss.com/products/seam/components" xmlns:core="http://jboss.com/products/seam/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://jboss.com/products/seam/core http://jboss.com/products/seam/core-2.2.xsd http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.2.xsd"> <core:init debug="true"/> </components> Any suggestions on what to do to enable the debug page correctly? Cheers! Full stacktrace: org.jboss.seam.contexts.PageContext.getPhaseId(PageContext.java:163) org.jboss.seam.contexts.PageContext.isBeforeInvokeApplicationPhase(PageContext.java:175) org.jboss.seam.contexts.PageContext.getCurrentWritableMap(PageContext.java:91) org.jboss.seam.contexts.PageContext.remove(PageContext.java:105) org.jboss.seam.Component.newInstance(Component.java:2141) org.jboss.seam.Component.getInstance(Component.java:2021) org.jboss.seam.Component.getInstance(Component.java:2000) org.jboss.seam.Component.getInstance(Component.java:1994) org.jboss.seam.Component.getInstance(Component.java:1967) org.jboss.seam.Component.getInstance(Component.java:1962) org.jboss.seam.faces.FacesPage.instance(FacesPage.java:92) org.jboss.seam.core.ConversationPropagation.restorePageContextConversationId(ConversationPropagation.java:84) org.jboss.seam.core.ConversationPropagation.restoreConversationId(ConversationPropagation.java:57) org.jboss.seam.jsf.SeamPhaseListener.afterRestoreView(SeamPhaseListener.java:391) org.jboss.seam.jsf.SeamPhaseListener.afterServletPhase(SeamPhaseListener.java:230) org.jboss.seam.jsf.SeamPhaseListener.afterPhase(SeamPhaseListener.java:196) com.sun.faces.lifecycle.Phase.handleAfterPhase(Phase.java:175) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:114) com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:104) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:265) weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292) weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26) 
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178)
        org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290)
        org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388)
        org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515)
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83)
        org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158)
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
        weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
        weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
        weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
        weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
        weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
        weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
        weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

    UPDATE 1: Now the debug page does not appear at all. When I request http://localhost/myapp/debug.xhtml I get the same page as for any URL that does not exist: myapp/debug.xhtml. I opened the .ear and the following JBoss jars are in it:

        jboss-seam-debug-2.2.0.GA.jar
        jboss-el-1.0_02.CR4.jar
        jboss-seam-2.2.0.GA.jar

    My current configuration:

        <?xml version="1.0" encoding="UTF-8"?>
        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
                 version="2.5">
          <display-name>PRS 6.0</display-name>
          <session-config>
            <session-timeout>30</session-timeout>
          </session-config>
          <!-- The default behavior of JSF is to map the incoming request for a JSF view identifier
               (view ID for short) to a JSP file with the file extension .jsp. To get JSF to look for
               a Facelets template instead, we must register the .xhtml extension as the default
               suffix for JSF views -->
          <context-param>
            <param-name>javax.faces.DEFAULT_SUFFIX</param-name>
            <param-value>.xhtml</param-value>
          </context-param>
          <context-param>
            <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
            <param-value>server</param-value>
          </context-param>
          <context-param>
            <param-name>javax.faces.CONFIG_FILES</param-name>
            <param-value>/WEB-INF/faces-config/application.xml</param-value>
          </context-param>
          <context-param>
            <param-name>facelets.REFRESH_PERIOD</param-name>
            <param-value>2</param-value>
          </context-param>
          <context-param>
            <param-name>facelets.DEVELOPMENT</param-name>
            <param-value>true</param-value>
          </context-param>
          <context-param>
            <param-name>facelets.SKIP_COMMENTS</param-name>
            <param-value>true</param-value>
          </context-param>
          <context-param>
            <param-name>com.sun.faces.verifyObjects</param-name>
            <param-value>false</param-value>
          </context-param>
          <context-param>
            <param-name>org.ajax4jsf.VIEW_HANDLERS</param-name>
            <param-value>com.sun.facelets.FaceletViewHandler</param-value>
          </context-param>
          <context-param>
            <param-name>org.ajax4jsf.COMPRESS_SCRIPT</param-name>
            <param-value>true</param-value>
          </context-param>
          <context-param>
            <param-name>org.ajax4jsf.COMPRESS_STYLE</param-name>
            <param-value>true</param-value>
          </context-param>
          <context-param>
            <param-name>org.ajax4jsf.xmlparser.ORDER</param-name>
            <param-value>NONE</param-value>
          </context-param>
          <context-param>
            <param-name>org.richfaces.SKIN</param-name>
            <param-value>blueSky</param-value>
          </context-param>
          <context-param>
            <param-name>org.richfaces.CONTROL_SKINNING</param-name>
            <param-value>enable</param-value>
          </context-param>
          <context-param>
            <param-name>org.richfaces.LoadStyleStrategy</param-name>
            <param-value>ALL</param-value>
          </context-param>
          <context-param>
            <param-name>org.richfaces.LoadScriptStrategy</param-name>
            <param-value>ALL</param-value>
          </context-param>
          <!-- Seam Filter (MUST BE FIRST) -->
          <filter>
            <filter-name>Seam Filter</filter-name>
            <filter-class>org.jboss.seam.servlet.SeamFilter</filter-class>
          </filter>
          <filter-mapping>
            <filter-name>Seam Filter</filter-name>
            <url-pattern>/*</url-pattern>
          </filter-mapping>
          <!-- RichFaces filter -->
          <filter>
            <display-name>RichFaces Filter</display-name>
            <filter-name>richfaces</filter-name>
            <filter-class>org.ajax4jsf.Filter</filter-class>
            <init-param>
              <description>Set the size limit for uploaded files as attachments in bytes. (max 5MB)</description>
              <param-name>maxRequestSize</param-name>
              <param-value>5242880</param-value>
            </init-param>
          </filter>
          <filter-mapping>
            <filter-name>richfaces</filter-name>
            <servlet-name>Faces Servlet</servlet-name>
            <dispatcher>REQUEST</dispatcher>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>INCLUDE</dispatcher>
          </filter-mapping>
          <listener>
            <listener-class>XX.XXXX.XXX.prs.web.listeners.ResourceInitializationListener</listener-class>
          </listener>
          <listener>
            <listener-class>com.sun.faces.config.ConfigureListener</listener-class>
          </listener>
          <listener>
            <listener-class>XX.XXXX.XXX.prs.web.listeners.EJBInjectionListener</listener-class>
          </listener>
          <!-- Seam Listener -->
          <listener>
            <listener-class>org.jboss.seam.servlet.SeamListener</listener-class>
          </listener>
          <!-- Faces Servlet -->
          <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
          </servlet>
          <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>*.xhtml</url-pattern>
          </servlet-mapping>
          <!-- Seam Resource Servlet (mangled in the original post; apparently commented out):
               <servlet>
                 <servlet-name>Seam Resource Servlet</servlet-name>
                 <servlet-class>org.jboss.seam.servlet.SeamResourceServlet</servlet-class>
               </servlet>
               <servlet-mapping>
                 <servlet-name>Seam Resource Servlet</servlet-name>
                 <url-pattern>/seam/resource/*</url-pattern>
               </servlet-mapping>
          -->
          <welcome-file-list>
            <welcome-file>index.xhtml</welcome-file>
          </welcome-file-list>
        </web-app>

    faces-config:

        <?xml version="1.0" encoding="UTF-8"?>
        <faces-config xmlns="http://java.sun.com/xml/ns/j2ee"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-facesconfig_1_2.xsd"
                      version="1.2">
          <application>
            <locale-config>
              <default-locale>en</default-locale>
              <supported-locale>en</supported-locale>
            </locale-config>
            <view-handler>com.sun.facelets.FaceletViewHandler</view-handler>
            <el-resolver>org.jboss.seam.el.SeamELResolver</el-resolver>
            <resource-bundle>
              <base-name>XX.XXXX.XXX.prs.web.messages.messages</base-name>
              <var>msgs</var>
            </resource-bundle>
            <resource-bundle>
              <base-name>XX.XXXX.XXX.prs.web.messages.validation</base-name>
              <var>val</var>
            </resource-bundle>
          </application>
          <lifecycle>
            <phase-listener>XX.XXXX.XXX.prs.web.listeners.SetFocusListener</phase-listener>
          </lifecycle>
          <!-- <lifecycle> -->
          <!--   <phase-listener>XX.XXXX.XXX.prs.web.listeners.DebugPhaseListener</phase-listener> -->
          <converter>
            <converter-for-class>XX.XXXX.XXX.prs.model.Applicant</converter-for-class>
            <converter-class>XX.XXXX.XXX.prs.web.common.converters.ApplicantConverter</converter-class>
          </converter>
          <validator>
            <validator-id>EmailValidator</validator-id>
            <validator-class>XX.XXXX.XXX.prs.web.common.validators.EmailValidator</validator-class>
          </validator>
        </faces-config>

    components.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <components xmlns="http://jboss.com/products/seam/components"
                    xmlns:core="http://jboss.com/products/seam/core"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:schemaLocation="http://jboss.com/products/seam/core http://jboss.com/products/seam/core-2.2.xsd
                                        http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.2.xsd">
          <core:init debug="true"/>
          <core:manager concurrent-request-timeout="500"
                        conversation-timeout="1200000"
                        conversation-id-parameter="cid"
                        parent-conversation-id-parameter="pid"/>

    UPDATE 2: These guys here have the same problem: they have an outer EAR project containing the inner WAR project, and they discuss that this may be related to how the jars end up in the projects.

    I use Maven, and I had set it to create "skinny wars", that is, to exclude all jar dependencies from the inner WAR project so that it stays small in size; all its dependencies are contained in the EAR and are used by all the other modules. I changed the settings of the maven-war-plugin to leave the web-specific jars inside the war (the ones mentioned in the link, such as RichFaces, jboss-seam-debug and Facelets). However, the problem has reverted to its previous form: I now get the debug page, whatever link I press, with the initial exception:

        Caused by java.lang.IllegalStateException with message: "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)"

    UPDATE 3: The structure of the application .ear is the following:

        application.ear
        |--> APP-INF
        |    |--> classes
        |--> lib  (all jar dependencies go here, including the WAR dependencies and EJB module dependencies)
        |--> META-INF
        |    |--> application.xml
        |    |--> data-sources.xml
        |    |--> MANIFEST.MF
        |    |--> weblogic.xml
        |    |--> weblogic-application.xml
        |--> jboss-seam-2.2.0.GA.jar
        |--> myEjbModule1.jar
        |--> myEjbModule2.jar
        |--> myEjbModule3.jar
        |--> myEjbModule4.jar
        |--> myWar.war  (NO libraries in WEB-INF/lib; finds everything in EAR/lib)

    When deploying the .ear WITHOUT debug enabled (not including jboss-seam-debug.jar in the ear), the application loads correctly. When deploying WITH jboss-seam-debug.jar in the EAR (the EAR/lib directory), the application does not appear; ONLY the debug page does, with the following exception (stack trace at the end):

        Exception during request processing:
        Caused by java.lang.IllegalStateException with message: "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)"

    When "JBoss-izing" the same EAR (removing the Hibernate jars, which are provided by JBoss, and moving all libraries from EAR/lib to the EAR root), the application loads correctly: BOTH the application AND the debug page appear correctly.

    Full stack trace:

        org.jboss.seam.contexts.PageContext.getPhaseId(PageContext.java:163)
        org.jboss.seam.contexts.PageContext.isBeforeInvokeApplicationPhase(PageContext.java:175)
        org.jboss.seam.contexts.PageContext.getCurrentWritableMap(PageContext.java:91)
        org.jboss.seam.contexts.PageContext.remove(PageContext.java:105)
        org.jboss.seam.Component.newInstance(Component.java:2141)
        org.jboss.seam.Component.getInstance(Component.java:2021)
        org.jboss.seam.Component.getInstance(Component.java:2000)
        org.jboss.seam.Component.getInstance(Component.java:1994)
        org.jboss.seam.Component.getInstance(Component.java:1967)
        org.jboss.seam.Component.getInstance(Component.java:1962)
        org.jboss.seam.faces.FacesPage.instance(FacesPage.java:92)
        org.jboss.seam.core.ConversationPropagation.restorePageContextConversationId(ConversationPropagation.java:84)
        org.jboss.seam.core.ConversationPropagation.restoreConversationId(ConversationPropagation.java:57)
        org.jboss.seam.jsf.SeamPhaseListener.afterRestoreView(SeamPhaseListener.java:391)
        org.jboss.seam.jsf.SeamPhaseListener.afterServletPhase(SeamPhaseListener.java:230)
        org.jboss.seam.jsf.SeamPhaseListener.afterPhase(SeamPhaseListener.java:196)
        com.sun.faces.lifecycle.Phase.handleAfterPhase(Phase.java:175)
        com.sun.faces.lifecycle.Phase.doPhase(Phase.java:114)
        com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:104)
        com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118)
        javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
        weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
        weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
        weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
        weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:530)
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83)
        org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178)
        org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290)
        org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388)
        org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515)
        org.jboss.seam.web.Ajax4jsfFilter.doFilter(Ajax4jsfFilter.java:56)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158)
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
        weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
        weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
        weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
        weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
        weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
        weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
        weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
        weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
        weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
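
    For reference, the "skinny war" setup described in UPDATE 2 is usually produced by a maven-war-plugin configuration along these lines (a minimal sketch of the common recipe, not the poster's actual POM):

        <!-- minimal "skinny war" sketch; not taken from the post -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-war-plugin</artifactId>
          <configuration>
            <!-- strip every jar from WEB-INF/lib; the EAR's lib directory supplies them -->
            <packagingExcludes>WEB-INF/lib/*.jar</packagingExcludes>
            <archive>
              <manifest>
                <!-- reference the shared jars on the war's manifest classpath instead -->
                <addClasspath>true</addClasspath>
                <classpathPrefix>lib/</classpathPrefix>
              </manifest>
            </archive>
          </configuration>
        </plugin>

    Leaving the Seam web-tier jars (jboss-seam-debug, RichFaces, Facelets) out of that exclusion keeps them only in WEB-INF/lib, which helps avoid the phase listener being picked up by more than one classloader.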

    Read the article

  • Mirth Transformer Error

    - by Ryan H
    I'm getting the following error when trying to convert HL7v3 to HL7v2. The message passed in is:

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
          <S:Body>
            <PRPA_IN201306UV02 xmlns="urn:hl7-org:v3"
                               xmlns:ns2="urn:gov:hhs:fha:nhinc:common:nhinccommon"
                               xmlns:ns3="urn:gov:hhs:fha:nhinc:common:patientcorrelationfacade"
                               xmlns:ns4="http://schemas.xmlsoap.org/ws/2004/08/addressing"
                               ITSVersion="XML_1.0">
              <id extension="4ae5403:12752e71a17:-7b52" root="1.1.1"/>
              ...
            </PRPA_IN201306UV02>
          </S:Body>
        </S:Envelope>

    The error I get is:

        ERROR-300: Transformer error
        ERROR MESSAGE: Error evaluating transformer
        com.webreach.mirth.server.MirthJavascriptTransformerException:
        CHANNEL: v3v2ConversionResponseMessage
        CONNECTOR: sourceConnector
        SCRIPT SOURCE:
        LINE NUMBER: 5
        DETAILS: TypeError: The prefix "S" for element "S:Envelope" is not bound.
        at com.webreach.mirth.server.mule.transformers.JavaScriptTransformer.evaluateScript(JavaScriptTransformer.java:460)
        at com.webreach.mirth.server.mule.transformers.JavaScriptTransformer.transform(JavaScriptTransformer.java:356)
        at org.mule.transformers.AbstractEventAwareTransformer.doTransform(AbstractEventAwareTransformer.java:48)
        at org.mule.transformers.AbstractTransformer.transform(AbstractTransformer.java:197)
        at org.mule.transformers.AbstractTransformer.transform(AbstractTransformer.java:200)
        at org.mule.impl.MuleEvent.getTransformedMessage(MuleEvent.java:251)
        at org.mule.routing.inbound.SelectiveConsumer.isMatch(SelectiveConsumer.java:61)
        at org.mule.routing.inbound.InboundMessageRouter.route(InboundMessageRouter.java:83)
        at org.mule.providers.AbstractMessageReceiver$DefaultInternalMessageListener.onMessage(AbstractMessageReceiver.java:493)
        at org.mule.providers.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:272)
        at org.mule.providers.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:231)
        at com.webreach.mirth.connectors.vm.VMMessageReceiver.getMessages(VMMessageReceiver.java:207)
        at org.mule.providers.TransactedPollingMessageReceiver.poll(TransactedPollingMessageReceiver.java:108)
        at org.mule.providers.PollingMessageReceiver.run(PollingMessageReceiver.java:90)
        at org.mule.impl.work.WorkerContext.run(WorkerContext.java:290)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
        at java.lang.Thread.run(Unknown Source)

    When I remove the S: prefix from the Envelope and Body elements and redefine the namespace as the default, I get a new error, "TypeError: The prefix "xsi" for attribute "xsi:nil" associated with an element type "targetMessage" is not bound.", referring to:

        <targetMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>

    It is as if Mirth can't handle a namespace being declared on the same line as the first use of that element. Any suggestions would be useful.
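
    For reference, Mirth transformer scripts run under Rhino with E4X, where namespaced elements are addressed through explicitly bound Namespace objects rather than through the document's prefixes. A minimal sketch, assuming the parsed Envelope is available as Mirth's usual msg variable:

        // minimal E4X sketch (assumes Mirth's usual 'msg' variable holds the parsed Envelope)
        var soap = new Namespace('http://schemas.xmlsoap.org/soap/envelope/');
        var hl7  = new Namespace('urn:hl7-org:v3');
        // walk Envelope/Body/PRPA_IN201306UV02 with qualified names instead of bare prefixes
        var response = msg.soap::Body.hl7::PRPA_IN201306UV02;
        var rootId   = response.hl7::id.@root.toString();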

    Read the article

  • Snow Leopard upgrade -> reinstalling sqlite3-ruby gem problem

    - by Carl Tessler
    Hi all, I have Ruby 1.8.7 (natively compiled), Rails 2.3.4, OS X 10.6.2 and also sqlite3-ruby. The error I'm getting when accessing the Rails app is:

        NameError: uninitialized constant SQLite3::Driver::Native::Driver::API

    History: I upgraded to Snow Leopard by migrating my apps with a FireWire cable from my old MacBook to the new one. Everything was running perfectly for months, but yesterday I needed to install watir, which depends on rb-appscript, which didn't build due to a "wrong architecture" error in libsqlite3.dylib. I figured the build was made on the old machine, so I wanted to rebuild sqlite3-ruby:

        $ sudo gem uninstall sqlite3-ruby
        $ sudo gem install sqlite3-ruby
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3-ruby:
        ERROR: Failed to build gem native extension.
        /usr/local/bin/ruby extconf.rb
        checking for fdatasync() in -lrt... no
        checking for sqlite3.h... yes
        checking for sqlite3_open() in -lsqlite3... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for more
        details. You may need configuration options.

    It seems the sqlite3 libs are not working properly. I tried installing the MacPorts sqlite3 (sudo port install sqlite3) and using that instead, but with the same result... so I rebuilt sqlite3 from scratch: download, configure, make, make install. After that, the gem builds perfectly, but it doesn't work in Rails, giving the error at the top of this post. I'm not really sure where to go from here, because I've tried the following:

    - Rebuilt sqlite3 from the newest source (http://www.sqlite.org/download.html)
    - Reinstalled sqlite3-ruby (sudo gem uninstall sqlite3-ruby && sudo gem install sqlite3-ruby)
    - Used sqlite3 from MacPorts (sudo port install sqlite3 && sudo gem install sqlite3-ruby)
    - Reinstalled Rails (sudo gem install rails sqlite3-ruby) and updated environment.rb to Rails 2.3.5

    To no avail; I still get this error:

        NameError: uninitialized constant SQLite3::Driver::Native::Driver::API
        from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `const_missing'
        from /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/driver/native/driver.rb:76:in `open'
        from /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/database.rb:76:in `initialize'

    Btw, I have Xcode installed from the Snow Leopard CD. What can I do to solve the problem?
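
    For reference, on Snow Leopard a "wrong architecture" error from a dylib usually means a 32-bit library is being loaded by the 64-bit system Ruby. A quick check-and-rebuild sketch (the library path is an assumption, not taken from the post):

        # check which architectures the library actually contains
        # (Snow Leopard's Ruby expects x86_64)
        file /usr/local/lib/libsqlite3.dylib
        # rebuild the gem's native extension, forcing the 64-bit architecture
        sudo env ARCHFLAGS="-arch x86_64" gem install sqlite3-ruby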

    Read the article

  • Objective-C Out of scope problem

    - by davbryn
    Hi, I'm having a few problems with some Objective-C and would appreciate some pointers. I have a class MapFileGroup which has the following simple interface (there are other member variables, but they aren't important):

        @interface MapFileGroup : NSObject {
            NSMutableArray *mapArray;
        }
        @property (nonatomic, retain) NSMutableArray *mapArray;

    mapArray is @synthesize'd in the .m file. It has an init method:

        - (MapFileGroup*) init {
            self = [super init];
            if (self) {
                mapArray = [NSMutableArray arrayWithCapacity:10];
            }
            return self;
        }

    It also has a method for adding a custom object to the array:

        - (BOOL) addMapFile:(MapFile*) mapfile {
            if (mapfile == nil) return NO;
            [mapArray addObject:mapfile];
            return YES;
        }

    The problem comes when I want to use this class; obviously due to a misunderstanding of memory management on my part. In my view controller I declare (in the @interface):

        MapFileGroup *fullGroupOfMaps;

    with the property:

        @property (nonatomic, retain) MapFileGroup *fullGroupOfMaps;

    Then in the .m file I have a function called loadMapData that does the following:

        MapFileGroup *mapContainer = [[MapFileGroup alloc] init];
        // create a predicate that we can use to filter an array
        // for all strings ending in .png (case insensitive)
        NSPredicate *caseInsensitivePNGFiles = [NSPredicate predicateWithFormat:@"SELF endswith[c] '.png'"];
        mapNames = [unfilteredArray filteredArrayUsingPredicate:caseInsensitivePNGFiles];
        [mapNames retain];
        NSEnumerator *enumerator = [mapNames objectEnumerator];
        NSString *currentFileName;
        NSString *nameOfMap;
        MapFile *mapfile;
        while (currentFileName = [enumerator nextObject]) {
            nameOfMap = [currentFileName substringToIndex:[currentFileName length] - 4]; // strip the extension
            mapfile = [[MapFile alloc] initWithName:nameOfMap];
            [mapfile retain];
            // add to array
            [fullGroupOfMaps addMapFile:mapfile];
        }

    This seems to work OK (though I can tell I've not got the memory management working properly; I'm still learning Objective-C). However, I have an (IBAction) that interacts with fullGroupOfMaps later. It calls a method within fullGroupOfMaps, but if I step into the class from that line while debugging, all of fullGroupOfMaps's objects are now out of scope and I get a crash.

    So, apologies for the long question and the big amount of code, but I guess my main question is: how should I handle a class with an NSMutableArray as an instance variable? What is the proper way of creating objects to be added to the class so that they don't get freed before I'm done with them?
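
    For reference, the crash described is the classic symptom of storing an autoreleased object in an ivar: [NSMutableArray arrayWithCapacity:] returns an autoreleased array that nothing retains, so it is deallocated once the pool drains. A minimal sketch of an init/dealloc pair that takes ownership under manual reference counting (a suggested fix, not the poster's code):

        // MRC sketch: alloc/init gives the class a retain count it owns,
        // so the array outlives the enclosing autorelease pool
        - (id)init {
            self = [super init];
            if (self) {
                mapArray = [[NSMutableArray alloc] initWithCapacity:10];
            }
            return self;
        }

        - (void)dealloc {
            [mapArray release]; // balance the alloc above
            [super dealloc];
        }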

    Read the article

  • How to make a Custom Data Generator for SQL XML DataType.

    - by Keith Sirmons
    Howdy, I am using Visual Studio 2010 and am playing around with the Database Projects. I am creating a DataGenerationPlan to insert data into a simple table, in which one of the column datatypes is XML. Out of the box, the generation plan uses the Regular Expression generator and generates something like this:

        HGcSv9wa7yM44T9x5oFT4pmBkEmv62lJ7OyAmCnL6yqXC2X..........

    I am looking at creating a custom data generator for this datatype and have followed this site for the basics: http://msdn.microsoft.com/en-us/library/aa833244.aspx

    This example works if I am creating a string datatype and using it for an nvarchar column. What do I need to change to hook this generator to the XML datatype? Below are my code files. The string property works for nvarchar. The XElement property does not work for the xml datatype, and RecordXMLDataGenerator is not listed as an option in the Generator column of the generation plan.

    CustomDataGenerators:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Data.Schema.Tools.DataGenerator;
        using Microsoft.Data.Schema.Extensibility;
        using Microsoft.Data.Schema;
        using Microsoft.Data.Schema.Sql;
        using System.Xml.Linq;

        namespace CustomDataGenerators
        {
            [DatabaseSchemaProviderCompatibility(typeof(SqlDatabaseSchemaProvider))]
            public class RecordXMLDataGenerator : Generator
            {
                private XElement _RecordData;

                [Output(Description = "Generates string of XML Data for the Record.", Name = "RecordDataString")]
                public string RecordDataString
                {
                    get { return _RecordData.ToString(SaveOptions.None); }
                }

                [Output(Description = "Generates XML Data for the Record.", Name = "RecordData")]
                public XElement RecordData
                {
                    get { return _RecordData; }
                }

                protected override void OnGenerateNextValues()
                {
                    base.OnGenerateNextValues();
                    XElement element = new XElement("Root",
                        new XElement("Children1", 1),
                        new XElement("Children6", 6)
                    );
                    _RecordData = element;
                }
            }
        }

    XML extensions file:

        <?xml version="1.0" encoding="utf-8" ?>
        <extensions assembly="" version="1"
                    xmlns="urn:Microsoft.Data.Schema.Extensions"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:schemaLocation="urn:Microsoft.Data.Schema.Extensions Microsoft.Data.Schema.Extensions.xsd">
          <extension type="CustomDataGenerators.RecordXMLDataGenerator"
                     assembly="CustomDataGenerators, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxx"
                     enabled="true"/>
        </extensions>

    Table.sql:

        CREATE TABLE [dbo].[Record]
        (
            id int IDENTITY (1,1) NOT NULL,
            recordData xml NULL,
            userId int NULL,
            test nvarchar(max) NULL,
            rowver rowversion NULL,
            CONSTRAINT pk_RecordID PRIMARY KEY (id)
        )

    Read the article

  • sqlite3-ruby can't make on rvm 1.8.7

    - by Josh Crews
    Upgrading to Rails 3 by starting with RVM Ruby 1.8.7, on OS X 10.5.8. Output:

        josh-crewss-macbook:~ joshcrews$ gem install sqlite3-ruby
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3-ruby:
        ERROR: Failed to build gem native extension.
        /Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/bin/ruby extconf.rb
        checking for sqlite3.h... yes
        checking for sqlite3_libversion_number() in -lsqlite3... yes
        checking for rb_proc_arity()... no
        checking for sqlite3_column_database_name()... no
        checking for sqlite3_enable_load_extension()... no
        checking for sqlite3_load_extension()... no
        creating Makefile
        make
        gcc -I. -I. -I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0 -I. -I/usr/local/include -I/opt/local/include -I/usr/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -g -O2 -fno-common -pipe -fno-common -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -c database.c
        database.c: In function ‘deallocate’:
        database.c:17: warning: implicit declaration of function ‘sqlite3_next_stmt’
        database.c:17: warning: assignment makes pointer from integer without a cast
        database.c: In function ‘initialize’:
        database.c:76: warning: implicit declaration of function ‘sqlite3_open_v2’
        database.c:79: error: ‘SQLITE_OPEN_READWRITE’ undeclared (first use in this function)
        database.c:79: error: (Each undeclared identifier is reported only once
        database.c:79: error: for each function it appears in.)
        database.c:79: error: ‘SQLITE_OPEN_CREATE’ undeclared (first use in this function)
        database.c: In function ‘set_sqlite3_func_result’:
        database.c:277: error: ‘sqlite3_int64’ undeclared (first use in this function)
        database.c: In function ‘rb_sqlite3_func’:
        database.c:311: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        database.c: In function ‘rb_sqlite3_step’:
        database.c:378: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        make: *** [database.o] Error 1

    Gem list (these are under RVM; under the system Ruby I've got lots more gems, including the sqlite3-ruby that has worked for 1.5 years):

        josh-crewss-macbook:~ joshcrews$ gem list

        *** LOCAL GEMS ***

        abstract (1.0.0)
        actionmailer (3.0.0.beta3)
        actionpack (3.0.0.beta3)
        activemodel (3.0.0.beta3)
        activerecord (3.0.0.beta3)
        activeresource (3.0.0.beta3)
        activesupport (3.0.0.beta3, 2.3.8)
        arel (0.3.3)
        builder (2.1.2)
        bundler (0.9.25)
        capybara (0.3.8)
        configuration (1.1.0)
        cucumber (0.7.2)
        cucumber-rails (0.3.1)
        culerity (0.2.10)
        database_cleaner (0.5.2)
        diff-lcs (1.1.2)
        erubis (2.6.5)
        ffi (0.6.3)
        gherkin (1.0.30)
        i18n (0.4.0, 0.3.7)
        json_pure (1.4.3)
        launchy (0.3.5)
        mail (2.2.1)
        memcache-client (1.8.3)
        mime-types (1.16)
        nokogiri (1.4.2)
        polyglot (0.3.1)
        rack (1.1.0)
        rack-mount (0.6.3)
        rack-test (0.5.4)
        rails (3.0.0.beta3)
        railties (3.0.0.beta3)
        rake (0.8.7)
        rdoc (2.5.8)
        rspec (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-core (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-expectations (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-mocks (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-rails (2.0.0.beta.10, 2.0.0.beta.8)
        rubygems-update (1.3.7)
        selenium-webdriver (0.0.20)
        spork (0.8.3)
        term-ansicolor (1.0.5)
        text-format (1.0.0)
        text-hyphen (1.0.0)
        thor (0.13.6)
        treetop (1.4.8)
        trollop (1.16.2)
        tzinfo (0.3.22)
        webrat (0.7.1)

    Version of Xcode: 3.1.1.

    My suspicion is that it has to do with -I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0, because i686-darwin9.8.0 doesn't exist at that path.
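
    For reference, the failed checks (sqlite3_open_v2, SQLITE_OPEN_READWRITE and sqlite3_int64 all missing) suggest the extension is compiling against a pre-3.5 sqlite3.h, most likely the stale system headers, ahead of a newer install. A hedged sketch of pointing the build at a modern SQLite (the /opt/local paths are an assumption, matching a MacPorts install):

        # tell extconf.rb where a modern sqlite3 lives instead of the stale headers
        gem install sqlite3-ruby -- --with-sqlite3-dir=/opt/local
        # equivalently, with explicit include and lib directories:
        gem install sqlite3-ruby -- --with-sqlite3-include=/opt/local/include --with-sqlite3-lib=/opt/local/lib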

    Read the article

< Previous Page | 724 725 726 727 728 729 730 731 732 733 734 735  | Next Page >