Search Results

Search found 37604 results on 1505 pages for 'build script'.


  • What language should I use for making a cross-platform library?

    - by Andrei
    I want to build a SyncML parsing library (no UI) which should be able to build up messages based on information provided by the host application, fed in by the library's methods. Also, the library should be able to do callbacks to methods in the host application. I want to be able to compile this and have it available on as many platforms as possible: Windows, Windows Phone 7, OS X, iOS, Linux, Android, BlackBerry. Basically as many platforms as possible; the priority is to have this available on mobile devices. Questions: What setup should I use (programming languages, compilers, IDEs, etc.)? How would I compile this library for these different platforms, and how would I connect to it? Any other info, e.g. articles that cover the subject of cross-platform development? I haven't done this sort of cross-platform project before, so any available information to put me in the right direction would be welcome. Myself, I have a background in C#/.NET and Objective-C.
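    For what it's worth, the usual recipe for this kind of reach is a core written in portable C, with callbacks expressed as function pointers, wrapped per platform (P/Invoke from C#, Objective-C directly, JNI on Android). A tiny sketch of what such a boundary could look like -- every name here is invented for illustration, not an existing API:

        /* syncml.h -- hypothetical public C API of the library */
        typedef struct syncml_session syncml_session;

        /* Host-supplied callback, invoked when the parser needs data
           from the host application. */
        typedef int (*syncml_data_cb)(const char *key, char *buf,
                                      int buflen, void *userdata);

        /* Create a session, build a message, tear down. */
        syncml_session *syncml_open(syncml_data_cb cb, void *userdata);
        int             syncml_build_message(syncml_session *s,
                                             char *out, int outlen);
        void            syncml_close(syncml_session *s);

    A plain C ABI like this is the one interface every listed platform can call into; the trade-off is writing the core itself in C/C++ rather than C#.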

    Read the article

  • Runaway version store in tempdb

    - by DavidWimbush
    Today was really a new one. I got back from a week off and found our main production server's tempdb had gone from its usual 200MB to 36GB. Ironically I spent Friday at the most excellent SQLBits VI and one of the sessions I attended was Christian Bolton talking about tempdb issues - including runaway tempdb databases. How just-in-time was that?! I looked into the file growth history and it looks like the problem started when my index maintenance job was chosen as the deadlock victim. (Funny how they almost make it sound like you've won something.) That left tempdb pretty big but for some reason it grew several more times. And since I'd left the file growth at the default 10% (aaargh!) the worse it got the worse it got. The last regrowth event was 2.6GB. Good job I've got Instant Initialization on. Since the Disk Usage report showed it was 99% unallocated I went into the Shrink Files dialogue which helpfully informed me the data file was 250MB.  I'm afraid I've got a life (allegedly) so I restarted the SQL Server service and then immediately ran a script to make the initial size bigger and change the file growth to a number of MB. The script complained that the size was smaller than the current size. Within seconds! WTF? Now I had to find out what was using so much of it. By using the DMV sys.dm_db_file_space_usage I found the problem was in the version store, and using the DMV sys.dm_db_task_space_usage and the Top Transactions by Age report I found that the culprit was a 3rd party database where I had turned on read_committed_snapshot and then not bothered to monitor things properly. Just because something has always worked before doesn't mean it will work in every future case. This application had an implicit transaction that had been running for over 2 hours.
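    For anyone chasing the same thing, the version-store check described above can be scripted in a couple of lines; the second query uses a DMV the post doesn't mention but which directly lists long-running snapshot transactions (both are stock SQL Server DMVs, shown here as an illustrative sketch):

        -- How much of tempdb is version store? (pages are 8 KB)
        SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
        FROM tempdb.sys.dm_db_file_space_usage;

        -- Which snapshot transactions have been open longest?
        SELECT transaction_id, elapsed_time_seconds
        FROM sys.dm_tran_active_snapshot_database_transactions
        ORDER BY elapsed_time_seconds DESC;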

    Read the article

  • MPI Project Template for VS2010

    If you are developing MS MPI applications with Visual Studio 2010, you are probably tired of following some tedious steps for every new C++ project that you create, similar to the following:

    1. In Solution Explorer, right-click YourProjectName, then click Properties to open the Property Pages dialog box.
    2. Expand Configuration Properties and then, under VC++ Directories, place the cursor at the beginning of the list that appears in the Include Directories text box and specify the location of the MS MPI C header files, followed by a semicolon, e.g. C:\Program Files\Microsoft HPC Pack 2008 SDK\Include;
    3. Still under Configuration Properties and under VC++ Directories, place the cursor at the beginning of the list that appears in the Library Directories text box and specify the location of the Microsoft HPC Pack 2008 SDK library file, followed by a semicolon. If you want to build/debug a 32-bit application: C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\i386; if you want to build/debug a 64-bit application: C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\amd64;
    4. Under Configuration Properties and then under Linker, select Input, place the cursor at the beginning of the list that appears in the Additional Dependencies text box and type the name of the MS MPI library, i.e. msmpi.lib;
    5. In the code file: #include "mpi.h"
    6. To debug the MPI project you have just set up, under Configuration Properties select Debugging and then switch the "Debugger to launch" combo value from Local Windows Debugger to MPI Cluster Debugger.

    Wouldn't it be great if at C++ project creation time you could choose an MPI Project Template that included the steps/configurations above? If you answered "yes", I have good news for you courtesy of a developer on our team (Qing). Feel free to download the MPI Project Template from the Visual Studio gallery. Comments about this post are welcome at the original blog.
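    A quick way to verify a project configured with the steps above (or created from the template) is a minimal MPI program; this smoke test is a generic sketch, not part of the template itself:

        #include "mpi.h"
        #include <cstdio>

        int main(int argc, char* argv[])
        {
            // Initialize MPI and find out which rank we are.
            MPI_Init(&argc, &argv);

            int rank = 0, size = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            std::printf("Hello from rank %d of %d\n", rank, size);

            MPI_Finalize();
            return 0;
        }

    If the include, library, and msmpi.lib settings are right, this builds cleanly and prints one line per process under mpiexec.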

    Read the article

  • Can't access a remote server due to a mistake in a firewall rule

    - by LMIT
    I need help due to a silly mistake I made! For a long time I've had a dedicated server hosted by register.it. I usually access this server (Windows Server 2008) remotely via Terminal Server. Today I wanted to block one site that was continually sending requests to my server, so I was adding a new rule in the firewall (the native firewall on Windows Server 2008), as I have done many times. But this time (my brain was probably asleep) I added a general rule that blocks everything! Now I can't access the server any more, no users can browse the sites, and nothing is working, because this rule blocks everything. I know it was a silly mistake, no need to tell me :) so please, what can I do? The only thing my provider lets me do is reboot the server from their control panel, but that doesn't help in any way because the firewall blocks me again. I have an administrator username and password, so what can I really do? Is there some trick, some technique, some expert guru who can help me in this very bad situation? UPDATE: I followed Tony's suggestion and ran NMAP to check whether some ports are open, but it looks like all are closed:

        NMAP RESULT
        Starting Nmap 6.00 ( http://nmap.org ) at 2012-05-29 22:32 W. Europe Daylight Time
        NSE: Loaded 93 scripts for scanning.
        NSE: Script Pre-scanning.
        Initiating Parallel DNS resolution of 1 host. at 22:32
        Completed Parallel DNS resolution of 1 host. at 22:33, 13.00s elapsed
        Initiating SYN Stealth Scan at 22:33
        Scanning xxx.xxx.xxx.xxx [1000 ports]
        SYN Stealth Scan Timing: About 29.00% done; ETC: 22:34 (0:01:16 remaining)
        SYN Stealth Scan Timing: About 58.00% done; ETC: 22:34 (0:00:44 remaining)
        Completed SYN Stealth Scan at 22:34, 104.39s elapsed (1000 total ports)
        Initiating Service scan at 22:34
        Initiating OS detection (try #1) against xxx.xxx.xxx.xxx
        Retrying OS detection (try #2) against xxx.xxx.xxx.xxx
        Initiating Traceroute at 22:34
        Completed Traceroute at 22:35, 6.27s elapsed
        Initiating Parallel DNS resolution of 11 hosts. at 22:35
        Completed Parallel DNS resolution of 11 hosts. at 22:35, 13.00s elapsed
        NSE: Script scanning xxx.xxx.xxx.xxx.
        Initiating NSE at 22:35
        Completed NSE at 22:35, 0.00s elapsed
        Nmap scan report for xxx.xxx.xxx.xxx
        Host is up.
        All 1000 scanned ports on xxx.xxx.xxx.xxx are filtered
        Too many fingerprints match this host to give specific OS details
        TRACEROUTE (using proto 1/icmp)
        HOP RTT ADDRESS
        1 ... ... ... 13 ... 30
        NSE: Script Post-scanning.
        Read data files from: D:\Program Files\Nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 145.08 seconds
        Raw packets sent: 2116 (96.576KB) | Rcvd: 61 (4.082KB)

    Question: can the provider log in locally with the username and password?

    Read the article

  • run or send a command to a tmux pane in a running tmux session

    - by cjroebuck
    I want to write a shell script which will attach to a named tmux session, select a window (or pane) in that session and run a command in that selected window (or pane). How do I do this from a bash script? I know tmux new-window -n:mywindow 'exec something' allows you to send commands to a freshly created window. But I need something like tmux select-window -t:0 'my command'. I suppose I could use send-keys, but it seems like there should be something that takes a command or list of commands to run.
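    For the record, send-keys does exactly this when given a full target; a sketch, assuming a session named mysession and targeting window 0, pane 0 (C-m sends the Enter key):

        tmux send-keys -t mysession:0.0 'echo hello' C-m

    The -t target takes the session:window.pane form, so a script can drive any pane of a running session without attaching to it first.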

    Read the article

  • UPK 3.6.1 Enablement Service Pack 1

    - by marc.santosusso
    UPK 3.6.1 Enablement Service Pack 1 is now available on My Oracle Support as Patch ID 9533920 (requires a My Oracle Support account). Below is a list of the enhancements included in this Enablement Service Pack.

    Tabbed Gateway: Users now have the option to deliver multiple help resources through the in-application support using UPK's new tabbed gateway. This feature is managed using the Configuration Utility for In-Application Support and is documented in the In-Application Support Guide.
    Firefox 3.6: The latest release of Mozilla Firefox, version 3.6, is now supported by the UPK Player, SmartHelp browser add-on, and SmartMatch recording technology.
    Oracle E-Business Suite: Added support for version 12.1.2 for enhanced object and context recognition. The UPK PLL is no longer needed for Oracle versions 12.1.2 and higher.
    Agile PLM: Agile PLM version 9.3 supported for enhanced object recognition.
    Customer Needs Management: Customer Needs Management schema 1.0.014 is supported for context recognition.
    Siebel CRM: Siebel CRM (On Premise) versions 8.2, 8.1.1.2, 8.0.0.9, and 8.1.1 build 21112 (in addition to the previously supported build 21111) supported for enhanced object and context recognition.
    SAP: SAP GUI for HTML version 7.10 patch 16 supported for enhanced object and context recognition.
    CA: CA Clarity PPM version R12.5 and CA Service Desk version R12.5 supported for context recognition.
    Java: Added support for Java 6 update 12.

    Read the article

  • Microsoft’s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it’s a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number).  Microsoft has been trying to plug away at their cloud service (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative.  I find it to be a great marketing idea with actual substance behind their real work.  From an academic programmer's perspective: in college we dreamed about this type of processing power.  This has decades of computer science theory behind it. A copy of the email I received follows.  (Note that I almost deleted this email, thinking it was spam due to its length.) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • I/O Error with PHP5-FPM, ptrace(PEEKDATA) failed

    - by MultiformeIngegno
    I get a lot of these:

        [NOTICE] child 19214 stopped for tracing
        [NOTICE] about to trace 19214
        [ERROR] ptrace(PEEKDATA) failed: Input/output error (5)
        [NOTICE] finished trace of 19214
        [WARNING] [pool www] child 19208, script 'blahblah.php' executing too slow (30.041419 sec), logging
        [NOTICE] child 19208 stopped for tracing
        [NOTICE] about to trace 19208
        [ERROR] ptrace(PEEKDATA) failed: Input/output error (5)
        [NOTICE] finished trace of 19208
        [WARNING] [pool www] child 19218, script 'blahblah.php' executing too slow (30.035029 sec), logging

    And when PHP reaches max children (at least I presume that's the case) it stops "working". Now, I know I can increase max_children (currently set to 9), but is there a way to stop PHP from "dying"? I'm on a VPS with 1 core and 512 MB of RAM (PHP5-FPM 5.4.4 + APC 3.1.10).
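    One knob worth knowing about here: request_terminate_timeout hard-kills a stuck child after a time limit, so slow scripts can't pile up until pm.max_children is exhausted. A sketch of the relevant pool directives (these are real php-fpm settings, but the values and paths below are illustrative for a 512 MB VPS):

        ; in the [www] pool configuration, e.g. /etc/php5/fpm/pool.d/www.conf
        pm = dynamic
        pm.max_children = 9
        request_slowlog_timeout = 30s          ; already producing the traces above
        slowlog = /var/log/php5-fpm.slow.log
        request_terminate_timeout = 60s        ; hard-kill anything slower than this

    The slowlog itself (rather than the ptrace notices in the error log) is also the place to see which blahblah.php call is actually stalling.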

    Read the article

  • When a Citrix desktop disconnects, the SAP client holds the session and the user can't log back in

    - by Stradas
    We have a fairly large Citrix implementation and have just pushed out the SAP desktop client to all of the desktops. Everything else is working fine, except for the following problem: if a user disconnects their session while it is running the SAP client (logging off works fine), the user cannot reconnect and log back in. We have a script on the server that terminates the session as a workaround. We can see on the server that it is the SAP client that is holding on and running. This is at a large office, but the SAP servers are in another hemisphere. As is the custom, Citrix says it's SAP and SAP says it's Citrix. I don't like using a PowerShell script as a substitute for a system configuration solution.
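    For what it's worth, the termination workaround described above can also be done with the stock session tools rather than PowerShell; a sketch (the server name and session ID are examples):

        quser /server:CTXSRV01
        rem ...read the ID of the stuck disconnected session from the output, then:
        logoff 3 /server:CTXSRV01

    This doesn't address the root cause (the SAP process surviving disconnect), but it keeps the workaround to two built-in commands.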

    Read the article

  • Command line tool to write flac, ogg vorbis and mp3 id3v2 tags?

    - by burzum
    Is there any command-line tool that can write all three formats/containers? I've already searched but could not find anything that does the job. So far I'm using vorbiscomment, metaflac and id3tool, and I really would like to replace them with a single tool if possible. If there is no tool that can write them all, is there at least any suggestion for replacing id3tool with something that can write id3v2 (v2.4) tags? I'm not looking for a tagger but for a tool that will allow me to write metadata to the different audio files from a script. My current status is that I have a script that uses the three tools (vorbiscomment, metaflac and id3tool), but then I realized that id3tool cannot write id3v2 tags... I'm creating these 3 audio formats automatically from a WAV master and need to be able to automate writing the metadata to these files.
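    If swapping only the id3 piece is acceptable, one candidate is mid3v2 from the mutagen package, which mimics the id3v2 tool's options but writes ID3v2.4. A sketch of the trio with that substitution (file names and tag values are examples):

        vorbiscomment -a -t "ARTIST=Some Artist" track.ogg
        metaflac --set-tag="ARTIST=Some Artist" track.flac
        mid3v2 --artist "Some Artist" --song "Some Title" track.mp3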

    Read the article

  • OpenGL texture on sphere

    - by Cilenco
    I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this function I can create a sphere:

        public Ball(GL10 gl, float radius) {
            ByteBuffer bb = ByteBuffer.allocateDirect(40000);
            bb.order(ByteOrder.nativeOrder());
            sphereVertex = bb.asFloatBuffer();
            this.radius = radius; // store the radius; the original referenced an unset field spelled "raduis"
            points = build();
        }

        private int build() {
            double dTheta = STEP * Math.PI / 180;
            double dPhi = dTheta;
            int points = 0;

            for (double phi = -(Math.PI / 2); phi <= Math.PI / 2; phi += dPhi) {
                for (double theta = 0.0; theta <= (Math.PI * 2); theta += dTheta) {
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.cos(theta)));
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.sin(theta)));
                    sphereVertex.put((float) (radius * Math.cos(phi)));
                    points++;
                }
            }

            sphereVertex.position(0);
            return points;
        }

        public void draw() {
            texture.bind();
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex);
            gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }

    My problem now is that I want to use this texture for the sphere, but then only a black ball is created (of course, because the top right corner is black). I use these texture coordinates because I want to use the whole texture: 0|0 0|1 1|1 1|0. That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?
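    The short answer to the question above: four corner coordinates only work for a quad or triangle; a sphere needs one (u, v) pair per vertex, mapping theta around the equator to u and phi pole-to-pole to v. A minimal sketch of how the build() loop could emit matching coordinates into a second buffer -- sphereTexture is an assumed FloatBuffer allocated like sphereVertex, not something from the original code:

        // Inside the nested loops of build(), right after the three vertex floats:
        // u runs 0..1 around the equator, v runs 0..1 from pole to pole.
        sphereTexture.put((float) (theta / (2 * Math.PI)));
        sphereTexture.put((float) ((phi + Math.PI / 2) / Math.PI));

        // And in draw(), hand the buffer to the fixed-function pipeline:
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, sphereTexture);
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, points); // a strip is usually a better fit than one big fan
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    With per-vertex coordinates the whole texture wraps around the ball instead of sampling a single corner.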

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle-tested on this and have failed to achieve my goal, which is to deny all access to all directories except the Public directory, and to allow access to all other directories only from specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled:

        sudo a2enmod ssl
        sudo a2enmod proxy
        sudo a2enmod proxy_http
        sudo a2enmod rewrite
        sudo a2ensite default-ssl

    Outside of the script I copied sites-available to sites-enabled and then reloaded Apache. I have a directory created for Railo CFML located at /var/www/Railo/. Navigating the browser to http://Server_IP_Address/Railo forces SSL and relocates to https://Server_IP_Address/Railo, which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appear to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this:

        Railo
            error
            Public
            NotPublic
            Sandbox

    These are my sites-enabled configurations:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            #Default Deny All to prevent walking backwards in file system
            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ ".*/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc

            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
        </VirtualHost>

    and

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ "/var/www/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

            # A self-signed (snakeoil) certificate can be created by installing
            # the ssl-cert package. See
            # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the
            # certificate chain for the server certificate. Alternatively
            # the referenced file can be the same as SSLCertificateFile
            # when the CA certificates are directly appended to the server
            # certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA
            # certificates for client authentication or alternatively one
            # huge file containing all of them (file must be PEM encoded)
            # Note: Inside SSLCACertificatePath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication or alternatively one huge file containing all
            # of them (file must be PEM encoded)
            # Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are
            # none, optional, require and optional_no_ca. Depth is a
            # number which specifies how deeply to verify the certificate
            # issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10

            # Access Control:
            # With SSLRequire you can do per-directory access control based
            # on arbitrary complex boolean expressions containing server
            # variable checks and other lookup directives. The syntax is a
            # mixture between C and Perl. See the mod_ssl documentation
            # for more details.
            #<Location />
            #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>

            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            #   Translate the client X.509 into a Basic Authorisation. This means that
            #   the standard Auth/DBMAuth methods can be used for access control. The
            #   user name is the `one line' version of the client's X.509 certificate.
            #   Note that no password is obtained from the user. Every entry in the user
            #   file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            #   This exports two additional environment variables: SSL_CLIENT_CERT and
            #   SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            #   server (always existing) and the client (only existing when client
            #   authentication is used). This can be used to import the certificates
            #   into CGI scripts.
            # o StdEnvVars:
            #   This exports the standard SSL/TLS related `SSL_*' environment variables.
            #   Per default this exportation is switched off for performance reasons,
            #   because the extraction step is an expensive operation and is usually
            #   useless for serving static content. So one usually enables the
            #   exportation for CGI and SSI requests only.
            # o StrictRequire:
            #   This denies access when "SSLRequireSSL" or "SSLRequire" applied even
            #   under a "Satisfy any" situation, i.e. when it applies access is denied
            #   and no other module can change it.
            # o OptRenegotiate:
            #   This enables optimized SSL connection renegotiation handling when SSL
            #   directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            # SSL Protocol Adjustments:
            # The safe and default but still SSL/TLS standard compliant shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from client. When you need a different shutdown
            # approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            #   This forces an unclean shutdown when the connection is closed, i.e. no
            #   SSL close notify alert is send or allowed to received. This violates
            #   the SSL/TLS standard but is needed for some brain-dead browsers. Use
            #   this when you receive I/O errors because of the standard approach where
            #   mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            #   This forces an accurate shutdown when the connection is closed, i.e. a
            #   SSL close notify alert is send and mod_ssl waits for the close notify
            #   alert of the client. This is 100% SSL/TLS standard compliant, but in
            #   practice often causes hanging connections with brain-dead browsers. Use
            #   this only for browsers where you know that their SSL implementation
            #   works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to workaround
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html

            #Proxy .cfm and cfc requests to Railo
            ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1
            ProxyPassReverse / http://127.0.0.1:8888/

            #Deny access to admin except for local clients
            <Location /railo-context/admin/>
                Order deny,allow
                Deny from all
                # Allow from <Omitted>
                # Allow from <Omitted>
                Allow from 127.0.0.1
            </Location>
        </VirtualHost>
        </IfModule>

    The apache2.conf includes the following:

        # Include the virtual host configurations:
        Include sites-enabled/

        <IfModule !mod_jk.c>
            LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        </IfModule>

        <IfModule mod_jk.c>
            JkMount /*.cfm ajp13
            JkMount /*.cfc ajp13
            JkMount /*.do ajp13
            JkMount /*.jsp ajp13
            JkMount /*.cfchart ajp13
            JkMount /*.cfm/* ajp13
            JkMount /*.cfml/* ajp13
            # Flex Gateway Mappings
            # JkMount /flex2gateway/* ajp13
            # JkMount /flashservices/gateway/* ajp13
            # JkMount /messagebroker/* ajp13
            JkMountCopy all
            JkLogFile /var/log/apache2/mod_jk.log
        </IfModule>

    I believe I understand most of this except the jk_module inclusion, which I've noticed produces an error in the logs that I can't sort out:

        [warn] No JkShmFile defined in httpd.conf. Using default /etc/apache2/logs/jk-runtime-status

    I've checked my regular expression against the paths of the directories with RegexBuddy just to be sure I had it right. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking Railo admin site access.
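    Given that Alias maps /Railo/ onto /var/www/Railo/, one way to express the stated goal without regex lookaheads is two plain Directory blocks: in Apache 2.2 the more specific path is merged last and wins. A sketch in the same Order syntax the configs above already use (the IP address is an example):

        <Directory /var/www/Railo>
            Order Deny,Allow
            Deny from all
            Allow from 203.0.113.10
        </Directory>

        <Directory /var/www/Railo/Public>
            Order Allow,Deny
            Allow from all
        </Directory>

    This sidesteps the usual pitfall with <Directory ~ "..."> regex sections, which are merged after all non-regex sections and can interact with them in surprising ways.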

    Read the article

  • How do I run an unsigned driver permanently?

    - by acidzombie24
    I compiled my own TrueCrypt build for Windows, and pressing F8 at boot to allow unsigned drivers worked: I was able to install and run TrueCrypt successfully. However, when I restart the machine and let it boot normally, I can no longer use TrueCrypt because of the unsigned drivers, even though I have it installed. What can I do to boot normally and still use my build of TrueCrypt? Can I modify the registry or add an exception? Can I sign my own driver without paying anything (even if it would then probably only work on my machine, which is 100% fine)? What can I do to run my own TrueCrypt build on a normal bootup?
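    One commonly used approach (an assumption here, not something from the original question) is to put Windows into test-signing mode and sign the driver with a self-generated certificate using the WDK tools; the commands are real, but the certificate and file names below are examples:

        :: Enable test-signing mode (reboot required; adds a "Test Mode" desktop watermark)
        bcdedit /set testsigning on

        :: Create a self-signed certificate and sign the driver with it
        makecert -r -pe -ss PrivateCertStore -n CN=MyTestCert MyTestCert.cer
        signtool sign /s PrivateCertStore /n MyTestCert truecrypt.sys

    A driver signed this way loads on every normal boot of that machine, without F8, as long as test-signing mode stays enabled.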

    Read the article

  • NEW 2-Day Instructor Led Course on Oracle Data Mining Now Available!

    - by chberger
    A NEW 2-day instructor-led course on Oracle Data Mining has been developed for customers and anyone wanting to learn more about data mining, predictive analytics and knowledge discovery inside the Oracle Database.

    Course objectives:
    - Explain basic data mining concepts and describe the benefits of predictive analysis
    - Understand primary data mining tasks, and describe the key steps of a data mining process
    - Use Oracle Data Miner to build, evaluate, and apply multiple data mining models
    - Use Oracle Data Mining's predictions and insights to address many kinds of business problems, including: predict individual behavior, predict values, find co-occurring events
    - Learn how to deploy data mining results for real-time access by end users

    Five reasons why you should attend this 2-day Oracle Data Mining Oracle University course. With Oracle Data Mining, a component of the Oracle Advanced Analytics Option, you will learn to gain insight and foresight to:
    - Go beyond simple BI and dashboards about the past. This course will teach you about "data mining" and "predictive analytics", analytical techniques that can provide huge competitive advantage
    - Take advantage of your data and investment in Oracle technology
    - Leverage all the data in your data warehouse (customer data, service data, sales data, customer comments and other unstructured data, point of sale (POS) data) to build and deploy predictive models throughout the enterprise
    - Learn how to explore and understand your data and find patterns and relationships that were previously hidden
    - Focus on solving strategic challenges to the business, for example, targeting "best customers" with the right offer, identifying product bundles, detecting anomalies and potential fraud, finding natural customer segments and gaining customer insight.

    Read the article

  • SVN Export or Recursively Remove .SVN Folders

    - by Ben Griswold
    I shared this script with a coworker yesterday. It doesn’t do much; it recursively deletes .svn folders from a source tree.  It comes in handy if you want to share your codebase or you get in a terrible spot with SVN and you just want to start all over. Just blow away all svn artifacts and use your mulligan. It’s true. You can nearly get the same result using the SVN export command, which copies your source sans the .svn folders to an alternate location.  The catch is an export only includes those files/folders which exist under version control.  If you want a clean copy of your source – versioned or not – export just might not do. The contents of the .cmd file include the following:

        for /f "tokens=* delims=" %%i in ('dir /s /b /a:d *.svn') do (
            rd /s /q "%%i"
        )

    Just download and drop the unzipped “SVN Cleanup.cmd” file into the root of the project, execute and away you go.  If you search around enough, I know you can find similar scripts and approaches elsewhere, but I’m still uploading my script for completeness and future reference. Download SVN Cleanup
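    For comparison, the export route mentioned above is a single command run against a working copy (paths here are examples):

        svn export C:\projects\myapp C:\projects\myapp-clean

    This produces a copy without any .svn folders, but, as noted, only of files under version control; the .cmd script is the way to go when unversioned files need to survive too.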

    Read the article

  • VMware software installation error

    - by Perry
    I am trying to install VMware software (the VMware View client), but I am facing the following error:

        Selecting previously unselected package vmware-view-client:i386.
        (Reading database ... 239594 files and directories currently installed.)
        Unpacking vmware-view-client:i386 (from .../vmware-view-client_2.1.0-0ubuntu0.12.04_i386.deb) ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...
        Setting up icaclient:i386 (12.1.0) ...
        dpkg: error processing icaclient:i386 (--configure):
         subprocess installed post-installation script returned error exit status 2
        Setting up vmware-view-client:i386 (2.1.0-0ubuntu0.12.04) ...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         icaclient:i386
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up icaclient:i386 (12.1.0) ...
        dpkg: error processing icaclient:i386 (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         icaclient:i386

    Any suggestions on how to fix this issue? Thanks in advance.
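    Reading the output above, the failure is in the icaclient (Citrix) package's post-installation script, not in VMware View itself. A first debugging step worth sketching (the dpkg paths are standard; the actual fix depends on what the script does on this system):

        # dpkg keeps the failing maintainer script here; inspect what exits with status 2:
        sudo nano /var/lib/dpkg/info/icaclient.postinst
        # after fixing or neutralizing the failing step, let dpkg finish configuration:
        sudo dpkg --configure -a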

    Read the article

  • Where is the SQL Azure Development Environment

    - by BuckWoody
    Recently I posted an entry explaining that you can develop in Windows Azure without having to connect to the main service on the Internet, using the Software Development Kit (SDK) which installs two emulators - one for compute and the other for storage. That brought up the question of the same kind of thing for SQL Azure. The short answer is that there isn’t one. While we’ll make the development experience for all versions of SQL Server, including SQL Azure, easier to write against, you can simply treat it as another edition of SQL Server. For instance, many of us use the SQL Server Developer Edition - which in versions up to 2008 is actually the Enterprise Edition - to develop our code. We might write that code against all kinds of environments, from SQL Express through Enterprise Edition. We know which features work on a certain edition, what T-SQL it supports and so on, and develop accordingly. We then test on the actual platform to ensure the code runs as expected. You can simply fold SQL Azure into that same development process. When you’re ready to deploy, if you’re using SQL Server Management Studio 2008 R2 or higher, you can script out the database when you’re done as a SQL Azure script (with change notifications where needed) by selecting the right “Engine Type” on the scripting panel. (Thanks to David Robinson for pointing this out and my co-worker Rick Shahid for the screen-shot - saved me firing up a VM this morning!) Will all this change? Will SSMS, “Data Dude” and other tools change to include SQL Azure? Well, I don’t have a specific roadmap for those tools, but we’re making big investments in Windows Azure and SQL Azure, so I can say that as time goes on, it will get easier. For now, make sure you know what features are and are not included in SQL Azure, and what T-SQL is supported. Here are a couple of references to help:
    - General Guidelines and Limitations: http://msdn.microsoft.com/en-us/library/ee336245.aspx
    - Transact-SQL Supported by SQL Azure: http://msdn.microsoft.com/en-us/library/ee336250.aspx
    - SQL Azure Learning Plan: http://blogs.msdn.com/b/buckwoody/archive/2010/12/13/windows-azure-learning-plan-sql-azure.aspx

    Read the article

  • calling a different python interpreter from bash command line

    - by Dennis Daniels
    I have Python 2.7 installed:

        [user@localhost google_appengine]$ python
        Python 2.7 (r27:82500, Sep 16 2010, 18:03:06)
        [GCC 4.5.1 20100907 (Red Hat 4.5.1-3)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.

    I want to use the Python 2.5.2 that is in this directory:

        [user@localhost Downloads]$ ls |grep "Python-2*"
        Python-2.5.2
        Python-2.5.2.tgz

    to run a Python script from the Khan Academy platform against a Google App Engine application:

        sudo python sample_data.py -a ~/workspace/GAE/google_appengine/appcfg.py upload

    Currently, when running the last script, Python 2.7 complains a lot (Google App Engine runs mostly on 2.5.2 and almost on 2.6). I would like to do something like:

        sudo python env set ~/Downloads/Python-2.5.2 sample_data.py -a ~/workspace/GAE/google_appengine/appcfg.py upload

    Is this possible? If yes, please point the way. If not, please suggest a way to call Python 2.5.2 WITHOUT having to uninstall Python 2.7. Many, many thanks. Dennis
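    If the Python-2.5.2 directory is just the unpacked source tree, it has to be built once; after that, the script can be run with that interpreter's full path instead of the system python. A sketch (the --prefix location is an example):

        cd ~/Downloads/Python-2.5.2
        ./configure --prefix=$HOME/python25 && make && make install
        sudo $HOME/python25/bin/python2.5 sample_data.py -a ~/workspace/GAE/google_appengine/appcfg.py upload

    Installing to a private prefix leaves the system Python 2.7 untouched, so nothing needs to be uninstalled.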

    Read the article

  • Default permissions for courier imap folders

    - by JoeCoder
    I'm using Courier IMAP. When a mail client creates a new folder, it's created on the filesystem with 640 permissions. I need it to be writable by the group, i.e. 660. I currently have IMAP_UMASK=007 in /etc/courier/imapd, but that's not enough. I'm not sure what else to try. Any ideas? I'm using Ubuntu Server 12.04. EDIT: I added a 50pt bounty to this. For an acceptable answer, I need a way to make it work from a package in a standard repo. If I download source and compile it myself, it won't be automatically kept up to date with security fixes. If I don't find a better answer, I'll add code to the admin script to call another sudo-approved script to chmod -R the whole directory before every change. But this is kind of hack-ish.
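    For reference, the fallback described above could be as small as this (the Maildir path is an example, and the wrapper would need a matching sudoers entry):

        #!/bin/sh
        # hypothetical wrapper, invoked via sudo before each admin change:
        # force group read/write on everything under the mailbox
        chmod -R g+rw /var/mail/example.com/user/Maildir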

    Read the article

  • Emacs: methods for debugging Python

    - by Tom Willis
    I use emacs for all my code edit needs. Typically, I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to keep the code on track. However, lately I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things that suggest that this is useful/possible. However, I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + appengine that might be making it more difficult, but when I try to do something like M-x pdb, and run pdb like this:

        /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/

    where .../bin/python is the interpreter buildout makes with the path set for all the eggs, and ~/bin/pdb is a simple script to call into pdb.main using the current python interpreter:

        HellooKitty:hydrant twillis$ cat ~/bin/pdb
        #! /usr/bin/env python
        if __name__ == "__main__":
            import sys
            sys.version_info
            import pdb
            pdb.main()

    .../bin/devappserver is the dev_appserver script that the buildout recipe makes for a GAE project, and .../parts/hydrant-app is the path to the app.yaml. I am first presented with a prompt:

        Current directory is /Users/twillis/bin/

    C-c C-f: nothing happens, but

        HellooKitty:hydrant twillis$ ps aux | grep pdb
        twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
        twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
        HellooKitty:hydrant twillis$

    so something is happening. C-x [space] will report that a breakpoint has been set, but I can't manage to get things going. It feels like I am missing something obvious here. Am I? So, is interactive debugging in emacs worthwhile? Is interactively debugging a Google App Engine app possible? Any suggestions on how I might get this working?
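    As a side note, the ~/bin/pdb wrapper may be unnecessary: Python can launch pdb itself via the -m switch, so the M-x pdb prompt could be given something like the following (the paths are the question's own; only the -m pdb form is the suggested change):

        /Users/twillis/projects/hydrant/bin/python -m pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/

    Running the buildout interpreter directly this way keeps the egg paths intact while still dropping into pdb on startup.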

    Read the article

  • How to find the last logged-in user for a given computer name in AD

    - by user1215305
    If anyone could help, it would be appreciated. I am trying to audit the network. I have a list of computers that have been active in the domain in the last 3 months. I have a little script that runs when a user logs in to a computer and writes a file with the computer name. Out of 39 computers, the script has run on only 23, so I am chasing up the other 16 computers. My question is: how can I find out who last logged in to a given computer name? Many thanks in advance.
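    One way to query a machine directly, assuming WMI/RPC is reachable on it (PC01 is an example name): the Win32_NetworkLoginProfile class records a LastLogon timestamp per profile, so the most recent entries can be pulled remotely:

        Get-WmiObject -Class Win32_NetworkLoginProfile -ComputerName PC01 |
            Sort-Object LastLogon -Descending |
            Select-Object -First 5 Name, LastLogon

    This needs nothing pre-installed on the target, which helps for the 16 machines where the login script never ran.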

    Read the article

  • XML Schema For MBSA Reports

    - by Steve Hawkins
    I'm in the process of creating a script to run the command-line version of Microsoft Baseline Security Analyzer (mbsacli.exe) against all of our servers. Since the MBSA reports are provided as XML documents, I should be able to write a script or small program to parse the XML looking for errors / issues. I'm wondering if anyone knows whether or not the XML schema for the MBSA reports is documented anywhere -- I have googled this and can't seem to find any trace of it. I've run across a few articles that address bits and pieces, but nothing that addresses the complete schema. Yes, I could just reverse engineer the XML, but I would like to understand a little more about the meaning of some of the tags. Thanks...
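    Lacking official documentation, reverse engineering plus a tolerant parser is probably the pragmatic route; a Python sketch of the kind of pass described above (the file name, the Check element, and its Name/Grade attributes are assumptions drawn from inspecting sample reports, not a documented schema):

        import xml.etree.ElementTree as ET

        # 'mbsa-report.xml' is a hypothetical output file from mbsacli.exe.
        tree = ET.parse('mbsa-report.xml')
        for check in tree.getroot().iter('Check'):
            # Attribute names are assumptions; adjust after inspecting real output.
            print check.get('Name'), check.get('Grade')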

    Read the article

  • zero-config CGI enabled web server

    - by halp
    To serve the static content of a directory over HTTP, one can simply navigate to that directory and type:

        python -m SimpleHTTPServer 11111

    which will start an HTTP server on port 11111. This hack is nice because it requires zero config: no stand-alone web server, no config files at all. Is it possible to extend this example, or is there an alternate way to achieve this goal, but with CGI support as well? The final goal is to have a quick and lazy way of serving a web site from a certain directory. The site has static content (HTML pages, images), but also a CGI script. The CGI script must work properly when accessed via a browser. Of course I could set up a virtual host in Apache, allow CGI inside it, etc. But that's not a zero-config approach.
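    Python 2's standard library actually covers this case too: the CGIHTTPServer module extends SimpleHTTPServer and executes scripts found in the cgi-bin (or htbin) subdirectory of the served directory, still with zero configuration:

        python -m CGIHTTPServer 11111

    The only requirements are that the CGI scripts live under ./cgi-bin and are executable.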

    Read the article

  • Can't connect to Asterisk's internal database [on hold]

    - by Bilbo
    I'm trying to get a PHP script to connect to Asterisk's internal MySQL database. I tried the standard method, for example:

        $con = mysqli_connect("192.168.1.126","root","mysql","asterisk");

    However, when I log into the Asterisk server to access the MySQL database, all I need to type is "mysql" and I'm logged in. I'm wondering whether it is possible for my PHP script to connect to the Asterisk internal database. The following error is shown:

        Warning: mysqli_connect(): (HY000/2003): Can't connect to MySQL server on '192.168.1.126' (111) in /var/www/html/project/sipSubScript.php on line 6
        Failed to connect to MySQL: Can't connect to MySQL server on '192.168.1.126' (111)
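    Error 111 is a plain "connection refused", which usually means MySQL on the Asterisk box is only listening on the local socket (which is why a bare "mysql" works there). A sketch of the two usual fixes (the account name, password and IP range below are examples):

        # /etc/my.cnf (or /etc/mysql/my.cnf) on the Asterisk server:
        #   comment out: skip-networking
        #   and set:     bind-address = 0.0.0.0
        # then restart mysqld and, in the mysql client, grant a remote account:
        GRANT ALL PRIVILEGES ON asterisk.* TO 'phpuser'@'192.168.1.%' IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;

    A dedicated account like phpuser is preferable to opening up root remotely.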

    Read the article
