Search Results

Search found 64033 results on 2562 pages for 'andrew siemer www andrewsiemer com'.


  • Running a bash script from an HTML link or button

    - by Andrew
    I have a webserver that's hosting lots of images. I want the client to be able to press a button or a link, which will run a bash script, which will create a video based on all these pictures. The script I'm trying to run is this: #!/bin/bash # cd to the directory cd /var/www/gallery # use ffmpeg to make video ffmpeg -pattern_type glob -i 'img-*jpg' -r 1 video.mp4 # Take the first file in the directory and name it video.mp4.jpg (for thumbnail) cp `ls | sort -n | head -1` video.mp4.jpg The script is located on the server. So when the client clicks the link or button, the script will run, and the video is created. I've tried both solutions listed here but I can't seem to get it to work. I have php installed on my server.
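
    One way to wire the button up without writing any PHP is to expose the script as a CGI endpoint and point the link at it. The sketch below makes two assumptions that are not part of the original setup: Apache with a cgi-bin directory enabled, and write access to /var/www/gallery for the web server user.

      #!/bin/bash
      # hypothetical /usr/lib/cgi-bin/make-video.cgi - returns a plain-text response
      echo "Content-type: text/plain"
      echo ""
      cd /var/www/gallery || { echo "gallery directory missing"; exit 1; }
      # same commands as the original script; -y overwrites an old video.mp4
      ffmpeg -pattern_type glob -i 'img-*jpg' -r 1 -y video.mp4 2>&1
      cp "$(ls | sort -n | head -1)" video.mp4.jpg
      echo "video.mp4 created"

    The HTML side is then just a link or form pointing at /cgi-bin/make-video.cgi; since PHP is installed, an equivalent route is a small PHP page that calls the script through shell_exec().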

    Read the article

  • Change ownership of a directory and all its contents from root to a new user

    - by Andrew Fashion
    I created a website under /var/www/html/ entirely as root: all the images, files, .htaccess, directories, etc. were uploaded and configured as root. I want to give the site its own user account so it's no longer owned by root. I haven't created that user account yet, and I also want to set up FTP for it. There is about 30GB of images in the folder as well. How can I go about changing all of this? I am running CentOS 5.5 64-bit. Thank you!
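
    A minimal sketch of the usual way to do this from a root shell follows; the account name ("andrew") and the choice of vsftpd for FTP are assumptions made for illustration.

      # create the new account and give it a password
      useradd andrew
      passwd andrew
      # hand the existing tree over; -R recurses, and since chown only rewrites
      # ownership metadata, the 30GB of images is walked but never copied
      chown -R andrew:andrew /var/www/html
      # optionally make the site the account's home (and FTP landing) directory
      usermod -d /var/www/html andrew
      # a simple FTP daemon available on CentOS 5.5
      yum install vsftpd
      service vsftpd start
      chkconfig vsftpd on

    After that, uploads over FTP as andrew already have the right owner, so root never needs to touch the tree again.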

    Read the article

  • Remote desktop connection over internet without port forwarding?

    - by hellbell.myopenid.com
    Hello, let's say we have this situation: I want to make a remote desktop connection to my friend over the internet, but I don't have permission to set up port forwarding on my router, and my friend can't configure his router either. So the question is how to connect to a computer without port forwarding. I know there are programs out there like TeamViewer that solve this task, but what I'm looking for is some free site that can make a "bridge" between our two computers, or a program I could install that simulates a virtual router, something like this: http://www.youtube.com/watch?v=SIof7kFTgJE. I need this because I have my own simple remote desktop connection program, but I can't connect to a computer outside my network since I don't have permission to configure the router :( Any comment, link, advice, or tutorial will be very helpful :)
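
    For what it's worth, one free way to build that kind of "bridge" yourself is an SSH reverse tunnel. The sketch below assumes you can reach some third machine with a public address that both sides can SSH to (a cheap VPS, a shell account, etc.); that relay host and the port numbers are assumptions, not something your setup already has.

      # on your friend's machine: publish his desktop port (3389 for RDP, or whatever
      # port your own remote desktop program listens on) as port 13389 on the relay
      ssh -N -R 13389:localhost:3389 user@relay.example.com
      # on your machine: pull that port back down and point your client at localhost:3389
      ssh -N -L 3389:localhost:13389 user@relay.example.com

    Both ends only make outbound connections, so neither router needs any port forwarding.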

    Read the article

  • Do NOT change "Copy Local" project references to false unless you understand the consequences

    - by Michael Freidgeim
    To optimize the performance of Visual Studio builds I've found multiple recommendations to change the CopyLocal property of dependent DLLs to false, e.g. from http://stackoverflow.com/questions/690033/best-practices-for-large-solutions-in-visual-studio-2008 ("CopyLocal? For sure turn this off"), from http://stackoverflow.com/questions/280751/what-is-the-best-practice-for-copy-local-and-with-project-references ("Always set the Copy Local property to false and enforce this via a custom msbuild step") and from http://codebetter.com/patricksmacchia/2007/06/20/benefit-from-the-c-and-vb-net-compilers-perf/ ("My advice is to always set 'Copy Local' to false"). Some time ago we tried changing the setting to false and found that it causes problems for the deployment of top-level projects. Recently I followed the suggestion and changed the setting for middle-level projects. It didn't cause immediate issues, but I was warned by Readify consultant Colin Savage about possible errors during deployments. I didn't undo the changes immediately, and we found a few issues during testing. There are many scenarios in which you need to leave 'Copy Local' set to true. The concerns are highlighted in some Stack Overflow answers, but those answers have a small number of votes.

    Top-level projects: set Copy Local = true. First of all, Copy Local = false doesn't work correctly for top-level projects, i.e. executables or web sites. As pointed out in the answer http://stackoverflow.com/a/6529461/52277, for all the references in the project at the top, set Copy Local = true. Alternatively you have to change the output directory, as described in http://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/: if you set 'Copy Local = false', VS will, unless you tell it otherwise, place each assembly alone in its own .\bin\Debug directory. Because of this, you will need to configure VS to place assemblies together in the same directory. To do so, for each VS project, go to VS > Project Properties > Build tab > Output path, and set the Output path to ..\bin\Debug for the debug configuration, and ..\bin\Release for the release configuration.

    Second-level dependencies: set Copy Local = true. Another example where CopyLocal = false fails at run time is when the top-level assembly doesn't directly reference one of its indirect dependencies. E.g. top-level assembly A has a reference to assembly B with CopyLocal = true, but assembly B has a reference to assembly C with CopyLocal = false. Most likely assembly C will be missing at runtime and will cause errors. See http://stackoverflow.com/questions/602765/when-should-copy-local-be-set-to-true-and-when-should-it-not?lq=1: "Copy local is important for deployment scenarios and tools. As a general rule you should use CopyLocal=True." Unfortunately there are some quirks, and CopyLocal won't necessarily work as expected for assembly references in secondary assemblies structured as shown below: MainApp.exe references MyLibrary.dll, which in turn references ThirdPartyLibrary.dll (if ThirdPartyLibrary.dll is in the GAC, CopyLocal won't copy it to the MainApp bin folder). This makes xcopy deployments difficult.

    DLLs loaded through reflection: set Copy Local = true. E.g. the user can see the error "System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information." The fix for this issue is recommended in http://stackoverflow.com/a/6200173/52277: "I solved this issue by setting the Copy Local attribute of my project's references to true."

    In general, the cost of investigating deployment issues may outweigh the benefits of reduced build time. Setting Copy Local to false without considering deployment issues is not a good idea.

    Read the article

  • Detecting Process Shutdown/Startup Events through ActivationAgents

    - by Ramkumar Menon
    @10g - This post is motivated by one of my close friends and colleague - who wanted to proactively know when a BPEL process shuts down/re-activates. This typically happens when you have a BPEL Process that has an inbound polling adapter, when the adapter loses connectivity to the source system. Or whatever causes it. One valuable suggestion came in from one of my colleagues - he suggested I write my own ActivationAgent to do the job. Well, it really worked. Here is a sample ActivationAgent that you can use. There are few methods you need to override from BaseActivationAgent, and you are on your way to receiving notifications/what not, whenever the shutdown/startup events occur. In the example below, I am retrieving the emailAddress property [that is specified in your bpel.xml activationAgent section] and use that to send out an email notification on the activation agent initialization. You could choose to do different things. But bottomline is that you can use the below-mentioned API to access the very same properties that you specify in the bpel.xml. package com.adapter.custom.activation; import com.collaxa.cube.activation.BaseActivationAgent; import com.collaxa.cube.engine.ICubeContext; import com.oracle.bpel.client.BPELProcessId; import java.util.Date; import java.util.Properties; public class LifecycleManagerActivationAgent extends BaseActivationAgent { public BPELProcessId getBPELProcessId() { return super.getBPELProcessId(); } private void handleInit() throws Exception { //Write initialization code here System.err.println("Entered initialization code...."); //e.g. String emailAddress = getActivationAgentDescriptor().getPropertyValue(emailAddress); //send an email sendEmail(emailAddress); } private void handleLoad() throws Exception { //Write load code here System.err.println("Entered load code...."); } private void handleUnload() throws Exception { //Write unload code here System.err.println("Entered unload code...."); } private void handleUninit() throws Exception { //Write uninitialization code here System.err.println("Entered uninitialization code...."); } public void init(ICubeContext icubecontext) throws Exception { super.init(icubecontext); System.err.println("Initializing LifecycleManager Activation Agent ....."); handleInit(); } public void unload(ICubeContext icubecontext) throws Exception { super.unload(icubecontext); System.err.println("Unloading LifecycleManager Activation Agent ....."); handleUnload(); } public void uninit(ICubeContext icubecontext) throws Exception{ super.uninit(icubecontext); System.err.println("Uninitializing LifecycleManager Activation Agent ....."); handleUninit(); } public String getName() { return "Lifecyclemanageractivationagent"; } public void onStateChanged(int i, ICubeContext iCubeContext) { } public void onLifeCycleChanged(int i, ICubeContext iCubeContext) { } public void onUndeployed(ICubeContext iCubeContext) { } public void onServerShutdown() { } } Once you compile this code, generate a jar file and ensure you add it to the server startup classpath. The library is ready for use after the server restarts. To use this activationAgent, add an additional activationAgent entry in the bpel.xml for the BPEL Process that you wish to monitor. After you deploy the process, the ActivationAgent object will be called back whenever the events mentioned in the overridden methods are raised. [init(), load(), unload(), uninit()]. Subsequently, your custom code is executed. Sample bpel.xml illustrating activationAgent definition and property definition. 
    <?xml version="1.0" encoding="UTF-8"?>
    <BPELSuitcase timestamp="1291943469921" revision="1.0">
      <BPELProcess wsdlPort="{http://xmlns.oracle.com/BPELTest}BPELTestPort" src="BPELTest.bpel" wsdlService="{http://xmlns.oracle.com/BPELTest}BPELTest" id="BPELTest">
        <partnerLinkBindings>
          <partnerLinkBinding name="client">
            <property name="wsdlLocation">BPELTest.wsdl</property>
          </partnerLinkBinding>
          <partnerLinkBinding name="test">
            <property name="wsdlLocation">test.wsdl</property>
          </partnerLinkBinding>
        </partnerLinkBindings>
        <activationAgents>
          <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="test">
            <property name="portType">Read_ptt</property>
          </activationAgent>
          <activationAgent className="com.oracle.bpel.activation.LifecycleManagerActivationAgent" partnerLink="test">
            <property name="emailAddress">[email protected]</property>
          </activationAgent>
        </activationAgents>
      </BPELProcess>
    </BPELSuitcase>

    Read the article

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspxI volunteered myself to give a short 30 minute presentation at a work lunch and learn on NodeJs. With my limited experience I see using Node as a great tool for build process improvement, scaffolding with yeoman, and running tests with Karma. I haven’t looked into using as a full server or development stack. I guess I’m too stuck on IIS and Visual Studio :-). Here are my notes, that aren’t very well formatted, but I wanted to share it anyways. What is it? "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices." Why should you be interested? another popular tool that can help you get the job done you can use the command prompt! can be run at build or release time to automate tasks What are some uses? https://www.npmjs.org/ - NuGet for Node packages http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc) http://yeoman.io/ "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." -> yeoman asks which components you want alternative - http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/ https://www.npmjs.org/package/generator-cg-angular - phantom js, less, // git is needed for bower http://git-scm.com/ run installer in Windows before you can use bower // select Run Git from the Windows Command Prompt in the installer // requires a reboot http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error npm install -g git npm install -g yo npm install -g generator-cg-angular mkdir myapp cd myapp yo cg-angular npm install -g bower npm install -g grunt-cli yo bower grunt serve grunt test grunt build // there are many generators (generator-angular) is another one // I like the Nuget HotTowel-Angular from John Papa myself // needed IIS Node for Express -> prompt from WebMatrix Karma bat to startup Karma - see below image compression - https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line LESS compiling js and css combine and minification at build with Gulp for requireJS apps quick lightweight HTTP server - "Express" Build pipeline with Grunt or Gulp http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ Gulp is the newer and improved over Grunt. Supposed to be easier to use, but Grunt is more established. https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp https://github.com/assetgraph/assetgraph-builder Does a lot of the minimizing, combining, image optimization etc using Node. Looks interesting.... http://nodejs.org http://nodeschool.io/ http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/ https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/ https://codio.com/ http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx run unit tests - Karma in msBuild karma-start.bat @echo off cd %~dp0\.. 
REM 604800 is to make sure we only update once every 7 days call npm install --cache-min 604800 -g grunt-cli call npm install --cache-min 604800 call npm install --cache-min 604800 -g karma-cli karma start UnitTests\karma.conf.js REM karma start UnitTests\karma.conf.js --single-run REM see karma-start.bat and karam.config.js REM jsHint comes from Nuget

    Read the article

  • Using Solaris zfs + iscsi targets with Oracle VM

    - by wim.coekaerts
    I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available but I did have a solaris box, that I use for Oracle VDI, available. I set up a few iscsi targets on this solaris server and exported them to my 2 Oracle VM servers. Here's how I did this : (1) On the solaris side : # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT rpool 544G 129G 415G 23% ONLINE - I just have a simple zpool, called rpool, on this box. It has plenty of space available for my needs. So I will use rpool and I will create 5 50gb vols : zfs create -V 50G rpool/ovm1 zfs create -V 50G rpool/ovm2 zfs create -V 50G rpool/ovm3 zfs create -V 50G rpool/ovm4 zfs create -V 50G rpool/ovm5 I want to use these volumes for iscsi so I have to enable them as shared iscsi devices : zfs set shareiscsi=on rpool/ovm1 zfs set shareiscsi=on rpool/ovm2 zfs set shareiscsi=on rpool/ovm3 zfs set shareiscsi=on rpool/ovm4 zfs set shareiscsi=on rpool/ovm5 The command iscsitadm list target should list these devices so make sure they show up. # iscsitadm list target Target: rpool/ovm1 iSCSI Name: iqn.1986-03.com.sun:02:896c766c-0943-4da5-d47e-9575b5a0be36 Connections: 2 Target: rpool/ovm2 iSCSI Name: iqn.1986-03.com.sun:02:a3116b46-73e0-e8c2-e80c-9a4f71aff069 Connections: 2 Target: rpool/ovm3 iSCSI Name: iqn.1986-03.com.sun:02:a838c400-2730-c0d6-f2c2-ee186a0261c1 Connections: 2 Target: rpool/ovm4 iSCSI Name: iqn.1986-03.com.sun:02:2e046afb-d66d-4f3f-c5de-8115e0ddd931 Connections: 2 Target: rpool/ovm5 iSCSI Name: iqn.1986-03.com.sun:02:66109fbe-81ac-ef05-f85e-ab8c1f34cb43 Connections: 2 At this point I want to make sure that I have some access control on these devices. To make it easier, I will create an alias for my 2 servers and use the alias for the ACL. get the iqn from the 2 servers on my 2 ovm servers (wcoekaer-srv1, wcoekaer-srv2) get the content of /etc/iscsi/initiatorname.iscsi (for each server) InitiatorName=iqn.1986-03.com.sun:01:2a7526f0ffff On the solaris side create the aliases : iscsitadm create initiator -n iqn.1986-03.com.sun:01:2a7526f0ffff wcoekaer-srv1 iscsitadm create initiator -n iqn.1986-03.com.sun:01:e31b08110f1 wcoekaer-srv5 Add the ACL to the targets : iscsitadm modify target -l wcoekaer-srv1 rpool/ovm1 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm2 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm3 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm4 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm5 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm1 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm2 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm3 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm4 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm5 (2) the Oracle VM side On each server just do 2 simple things : # iscsiadm -m discovery -t sendtargets -p ca-vdi1 where ca-vdi1 is my solaris server name # service iscsi restart When I do cat /proc/partitions on my servers I will see the devices show up # cat /proc/partitions major minor #blocks name 8 0 160836480 sda 8 1 104391 sda1 8 2 3148740 sda2 8 3 1052257 sda3 253 0 6377804 dm-0 253 1 6377804 dm-1 253 2 6377804 dm-2 8 16 52428800 sdb 8 32 52428800 sdc 8 48 52428800 sdd 8 80 52428800 sdf 8 64 52428800 sde These 5 new devices sd[b..f] are shared storage for Oracle VM and can be used to pass through to the VM's as phy: devices or put ocfs2 on it and use as shared filesystem storage for dom0 repositories. 
I am setting up an 11gR2 rac template (the cool stuff Saar did) so I am using my devices to create a 2 node RAC cluster with phy: devices.
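
    As a footnote, if you take the ocfs2 route instead of phy: passthrough, the extra step on the Oracle VM servers is roughly the sketch below. The label and mount point are made up for illustration, and it assumes the o2cb cluster configuration that ocfs2 needs is already in place on both servers.

      # format one of the new iscsi devices with ocfs2 (run on one server only)
      mkfs.ocfs2 -L ovsrepo /dev/sdb
      # mount it on both Oracle VM servers; _netdev marks it as needing the network (and iscsi) at boot
      mkdir -p /OVS
      mount -t ocfs2 -o _netdev /dev/sdb /OVS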

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error: D:\java\java\jogl>javac JOGL2Template.java <== compile ok D:\java\java\jogl>java JOGL2Template <== execute error Exception in thread "main" java.lang.ExceptionInInitializerError at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176) at JOGL2Template.<init>(JOGL2Template.java:24) at JOGL2Template.main(JOGL2Template.java:57) Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\ java\lib\gluegen-rt-natives-windows-i586.jar at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350) at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324) at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJa rCache.java:328) at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarC ache.java:283) at com.jogamp.common.os.Platform$3.run(Platform.java:308) at java.security.AccessController.doPrivileged(Native Method) at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298) at com.jogamp.common.os.Platform.<clinit>(Platform.java:207) ... 3 more Here is the JOGL2Template.java source code: import java.awt.Dimension; import java.awt.Frame; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import javax.media.opengl.GLAutoDrawable; import javax.media.opengl.GLCapabilities; import javax.media.opengl.GLEventListener; import javax.media.opengl.GLProfile; import javax.media.opengl.awt.GLCanvas; import com.jogamp.opengl.util.FPSAnimator; import javax.swing.JFrame; /* * JOGL 2.0 Program Template For AWT applications */ public class JOGL2Template extends JFrame implements GLEventListener { private static final int CANVAS_WIDTH = 640; // Width of the drawable private static final int CANVAS_HEIGHT = 480; // Height of the drawable private static final int FPS = 60; // Animator's target frames per second // Constructor to create profile, caps, drawable, animator, and initialize Frame public JOGL2Template() { // Get the default OpenGL profile that best reflect your running platform. GLProfile glp = GLProfile.getDefault(); // Specifies a set of OpenGL capabilities, based on your profile. GLCapabilities caps = new GLCapabilities(glp); // Allocate a GLDrawable, based on your OpenGL capabilities. GLCanvas canvas = new GLCanvas(caps); canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT)); canvas.addGLEventListener(this); // Create a animator that drives canvas' display() at 60 fps. final FPSAnimator animator = new FPSAnimator(canvas, FPS); addWindowListener(new WindowAdapter() { // For the close button @Override public void windowClosing(WindowEvent e) { // Use a dedicate thread to run the stop() to ensure that the // animator stops before program exits. new Thread() { @Override public void run() { animator.stop(); System.exit(0); } }.start(); } }); add(canvas); pack(); setTitle("OpenGL 2 Test"); setVisible(true); animator.start(); // Start the animator } public static void main(String[] args) { new JOGL2Template(); } @Override public void init(GLAutoDrawable drawable) { // Your OpenGL codes to perform one-time initialization tasks // such as setting up of lights and display lists. } @Override public void display(GLAutoDrawable drawable) { // Your OpenGL graphic rendering codes for each refresh. } @Override public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { // Your OpenGL codes to set up the view port, projection mode and view volume. 
} @Override public void dispose(GLAutoDrawable drawable) { // Hardly used. } } Any ideas what might be the cause of these errors?

    Read the article

  • How to deal with overly aggressive "Link Take Down Demands"?

    - by Eoin
    I've been receiving a large number of emails recently requesting I clean from link spam from my forum. Initially the emails were very polite and professional, and I was happy to remove the links. Recently the email have gotten very abrasive, here is a particularly rude example: From: dmcaviolations@company-one.com To: id@privacypost.com Hi, This is the second time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We really do need to remove this link. We have to report to Google any link we were unable to remove, and I wouldn't want to have to include your site in the list. Could you please remove our link from this page and any other page on your site? Thank You, Name Changed Behind the superficial pleasantries I feel there is some very real maliciousness. Note the email address, DMCA Violations, I don't see how the DMCA is involved here, except as a word which tends to strike fear in many people. Also relating to the email address, it doesn't match the company being linked to at all. How am I to trust they are truely operating on behalf of company-two when they don't even use one of it's email addresses. My email is hidden by privacypost. While a service with legitimate uses, I feel it's highly unprofessional for communications between to companies. The claim "This is the second time..." Every email I've received has started like this, but a check of my spam filters has never revealed a 1st mail. Initially I gave them the benefit of the doubt, by now though it's clear this is a cheap ploy to start me off on the defensive. And finally worst of all- the threats of reporting me to Google if I don't do everything they ask. I sent a polite reply asking for more information. I have no idea if the email address was even valid but I never received any response. Much later I got this followup mail From: name-changed@company-one.com To: id@privacypost.com Hi, This is the final time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We will soon be reporting to Google any link we were unable to remove, and currently your site will have to be on the list. Could you please remove our link from this page and any other page on your site? I appreciate your urgent attention to this matter. Thank You, Name Changed This time the from address was more personal, though still not obviously connected to the spammed company. Lets be honest, I don't for one second believe that the companies were the victim of a 3rd party spammer as they claim. The links in questions were generated well over a year ago, and I firmly believe the companies were directly responsible for the spam links in question, a type of spam that has plagued my forum. Now they have the audacity to demand I spend my time cleaning up their mess, using threats to ensure they get their way. Have recent changes in Googles algorithms meant all the cash they spent spamming the web has now turned into a liability? If so I can see why these companies are all of a sudden running scared. Frankly, cleaning up my forum is a good things, but the threats they are using sickens me. So my question here is specifically about the threats: Are they vaild, and would such reports to Google destroy my page rankings? Is there a way I can report this abusive behaviour to Google?

    Read the article

  • Using an LDAP directory with the ZFS Storage Appliance

    - by user13138569
    This entry covers using an OpenLDAP directory with the ZFS Storage Appliance. OpenLDAP was set up on a Solaris 11 host; the slapd.conf and the ldif used to create a test user, user01, are shown below. slapd.conf: # # See slapd.conf(5) for details on configuration options. # This file should NOT be world readable. # include /etc/openldap/schema/core.schema include /etc/openldap/schema/cosine.schema include /etc/openldap/schema/nis.schema # Define global ACLs to disable default read access. # Do not enable referrals until AFTER you have a working directory # service AND an understanding of referrals. #referral ldap://root.openldap.org pidfile /var/openldap/run/slapd.pid argsfile /var/openldap/run/slapd.args # Load dynamic backend modules: modulepath /usr/lib/openldap moduleload back_bdb.la # moduleload back_hdb.la # moduleload back_ldap.la # Sample security restrictions # Require integrity protection (prevent hijacking) # Require 112-bit (3DES or better) encryption for updates # Require 63-bit encryption for simple bind # security ssf=1 update_ssf=112 simple_bind=64 # Sample access control policy: # Root DSE: allow anyone to read it # Subschema (sub)entry DSE: allow anyone to read it # Other DSEs: # Allow self write access # Allow authenticated users read access # Allow anonymous users to authenticate # Directives needed to implement policy: # access to dn.base="" by * read # access to dn.base="cn=Subschema" by * read # access to * # by self write # by users read # by anonymous auth # # if no access controls are present, the default policy # allows anyone and everyone to read anything but restricts # updates to rootdn. (e.g., "access to * by * read") # # rootdn can always read and write EVERYTHING! ####################################################################### # BDB database definitions ####################################################################### database bdb suffix "dc=oracle,dc=com" rootdn "cn=Manager,dc=oracle,dc=com" # Cleartext passwords, especially for the rootdn, should # be avoid. See slappasswd(8) and slapd.conf(5) for details. # Use of strong authentication encouraged. rootpw secret # The database directory MUST exist prior to running slapd AND # should only be accessible by the slapd and slap tools. # Mode 700 recommended. directory /var/openldap/openldap-data # Indices to maintain index objectClass eq The ldif loaded into the directory: dn: dc=oracle,dc=com objectClass: dcObject objectClass: organization dc: oracle o: oracle dn: cn=Manager,dc=oracle,dc=com objectClass: organizationalRole cn: Manager dn: ou=People,dc=oracle,dc=com objectClass: organizationalUnit ou: People dn: ou=Group,dc=oracle,dc=com objectClass: organizationalUnit ou: Group dn: uid=user01,ou=People,dc=oracle,dc=com uid: user01 objectClass: top objectClass: account objectClass: posixAccount objectClass: shadowAccount cn: user01 uidNumber: 10001 gidNumber: 10000 homeDirectory: /home/user01 userPassword: secret loginShell: /bin/bash shadowLastChange: 10000 shadowMin: 0 shadowMax: 99999 shadowWarning: 14 shadowInactive: 99999 shadowExpire: -1 With the LDAP server running, the appliance itself is configured under Configuration > SERVICES > LDAP, where the Base search DN and the LDAP server are set. When the appliance cannot resolve a user, it reports "Unknown or invalid user". To confirm the directory itself was answering correctly, a Solaris 11 host was also configured as an LDAP client and the user looked up with getent:
    # svcadm enable svc:/network/nis/domain:default # svcadm enable ldap/client # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201 System successfully configured # getent passwd user01 user01:x:10001:10000::/home/user01:/bin/bash With the user resolving through LDAP, the appliance's NFS share can be mounted and used as user01: # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt # su user01 bash-4.1$ cd /mnt bash-4.1$ touch aaa bash-4.1$ ls -l total 1 -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa The new file is owned by the LDAP user user01, so the appliance is resolving users through the LDAP directory as intended!

    Read the article

  • Postfix - am I sending spam?

    - by olrehm
    today I received like 30 messages within 5 minutes telling me that some mail I send could not be delivered, mostly to *.ru email addresses which I did not send any mail to. I have my own webserver (postfix/dovecot) set up using this guide (http://workaround.org/ispmail/lenny) but adjusted a little bit for Ubuntu. I tested whether I am an Open Relay which I am apparently not. Now there are two possible reasons for the above mentioned emails: Either I am sending out spam, or somebody wants me to think that, correct? How can I check this? I selected one particular address that I supposedly send spam to. Then I searched my mail.log for this entry. I found two blocks that record that somebody from the server connected to my server and delivered some message to two different users. I cannot find an entry reporting that anyone from my server send an email to that server. Does this mean its just some mail to scare me or could it still have been send by me in the first place? Here is one such block from the log (I replaced some confidential stuff): Jun 26 23:23:28 mycustomernumber postfix/smtpd[29970]: connect from mx.webstyle.ru[195.144.251.97] Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: 044991528995: client=mx.webstyle.ru[195.144.251.97] Jun 26 23:23:29 mycustomernumber postfix/cleanup[29974]: 044991528995: message-id=<[email protected]> Jun 26 23:23:29 mycustomernumber postfix/qmgr[3369]: 044991528995: from=<>, size=2198, nrcpt=1 (queue active) Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) ESMTP::10024 /var/lib/amavis/tmp/amavis-20110626T223137-28598: <> -> <[email protected]> SIZE=2198 Received: from mycustomernumber.stratoserver.net ([127.0.0.1]) by localhost (rehmsen.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP for <[email protected]>; Sun, 26 Jun 2011 23:23:29 +0200 (CEST) Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) Checking: YakjkrdFq6A8 [195.144.251.97] <> -> <[email protected]> Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: disconnect from mx.webstyle.ru[195.144.251.97] Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) lookup_sql_field(id) (WARN: no such field in the SQL table), "someuser@myserver.com" result=undef Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: connect from localhost.localdomain[127.0.0.1] Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: 0A1FA1528A21: client=localhost.localdomain[127.0.0.1] Jun 26 23:23:32 mycustomernumber postfix/cleanup[29974]: 0A1FA1528A21: message-id=<[email protected]> Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: from=<>, size=2841, nrcpt=1 (queue active) Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: disconnect from localhost.localdomain[127.0.0.1] Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) FWD via SMTP: <> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21 Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) Passed CLEAN, [195.144.251.97] [195.144.251.97] <> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: YakjkrdFq6A8, Hits: 2.249, size: 2197, queued_as: 0A1FA1528A21, 2882 ms Jun 26 23:23:32 mycustomernumber postfix/smtp[29975]: 044991528995: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.3, delays=0.39/0.01/0.01/2.9, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21) Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 044991528995: removed 
Jun 26 23:23:33 mycustomernumber postfix/smtp[29980]: 0A1FA1528A21: to=<[email protected]>, orig_to=<[email protected]>, relay=mx3.hotmail.com[65.54.188.110]:25, delay=1.2, delays=0.15/0.02/0.51/0.55, dsn=2.0.0, status=sent (250 <[email protected]> Queued mail for delivery) Jun 26 23:23:33 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: removed Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection rate 1/60s for (smtp:195.144.251.97) at Jun 26 23:23:28 Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection count 1 for (smtp:195.144.251.97) at Jun 26 23:23:28 Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max cache size 1 at Jun 26 23:23:28 I can provide more info if you tell me what you need to know. Thank you for you help!
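
    A couple of quick checks help separate "my server really relayed this" from "someone forged my address". The sketch below assumes your own users submit mail with SASL authentication and that the log lives at /var/log/mail.log; adjust both if your setup differs.

      # anything unexpected sitting in the outbound queue?
      postqueue -p
      # did mail for that .ru address leave via an authenticated local user?
      grep 'ukrtvornet.ru' /var/log/mail.log | grep 'sasl_username'
      # which accounts have been submitting mail at all?
      grep -o 'sasl_username=[^,]*' /var/log/mail.log | sort | uniq -c | sort -rn

    If the bounced addresses never show up in outgoing entries tied to your own authenticated clients, the bounces are most likely backscatter from forged sender addresses rather than spam sent through your box, which is what the log block above (an inbound bounce being forwarded on) already suggests.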

    Read the article

  • BizTalk Business Rules Engine - Repeating Elements Question

    - by Andrew Cripps
    Hello, I'm trying to create what I think should be a relatively simple business rule to operate over repeating elements in an XML schema. Consider the following XML snippet (this is simplified, with namespaces removed, for readability): <Root> <AllAccounts> <Account id="1" currentPayment="10.00" arrearsAmount="25.00"> <AllCustomers> <Customer id="20" primary="true" canSelfServe="false" /> <Customer id="21" primary="false" canSelfServe="false" /> </AllCustomers> </Account> <Account id="2" currentPayment="10.00" arrearsAmount="15.00"> <AllCustomers> <Customer id="30" primary="true" canSelfServe="false" /> <Customer id="31" primary="false" canSelfServe="false" /> </AllCustomers> </Account> </AllAccounts> </Root> What I want to do is to have two rules: Set /Root/AllAccounts/Account[x]/AllCustomers/Customer[primary='true']/canSelfServe = true IF arrearsAmount < currentPayment Set /Root/AllAccounts/Account[x]/AllCustomers/Customer[primary='true']/canSelfServe = false IF arrearsAmount = currentPayment Where [x] is 0...number of /Root/AllAccounts/Account records present in the XML. I've tried two simple rules for this, and each rule seems to fire x * x times, where x is the number of Account records in the XML. I only want each rule to fire once for each Account record. Any help greatly appreciated! Thanks Andrew

    Read the article

  • Setting custom WCF-binding behaviour via .config file - why doesn't this work?

    - by Andrew Shepherd
    I am attempting to insert a custom behavior into my service client, following the example here. I appear to be following all of the steps, but I am getting a ConfigurationErrorsException. Is there anyone more experienced than me who can spot what I'm doing wrong? Here is the entire app.config file. <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <behaviors> <endpointBehaviors> <behavior name="ClientLoggingEndpointBehaviour"> <myLoggerExtension /> </behavior> </endpointBehaviors> </behaviors> <extensions> <behaviorExtensions> <add name="myLoggerExtension" type="ChatClient.ClientLoggingEndpointBehaviourExtension, ChatClient, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null"/> </behaviorExtensions> </extensions> <bindings> </bindings> <client> <endpoint behaviorConfiguration="ClientLoggingEndpointBehaviour" name="ChatRoomClientEndpoint" address="http://localhost:8016/ChatRoom" binding="wsDualHttpBinding" contract="ChatRoomLib.IChatRoom" /> </client> </system.serviceModel> </configuration> Here is the exception message: An error occurred creating the configuration section handler for system.serviceModel/behaviors: Extension element 'myLoggerExtension' cannot be added to this element. Verify that the extension is registered in the extension collection at system.serviceModel/extensions/behaviorExtensions. Parameter name: element (C:\Documents and Settings\Andrew Shepherd\My Documents\Visual Studio 2008\Projects\WcfPractice\ChatClient\bin\Debug\ChatClient.vshost.exe.config line 5) I know that I've correctly written the reference to the ClientLoggingEndpointBehaviourExtensionobject, because through the debugger I can see it being instantiated.

    Read the article

  • Trouble running setup package after Publishing in Visual Studio 2008

    - by Andrew Cooper
    I've got a small winform application that I've written that is running fine in the IDE. It builds with no errors or warnings. It's not using any third party controls. I'm coding in C# in Visual Studio 2008. When I Build -- Publish the application, everything seems to work fine. However, when I go and attempt to install the application via the setup.exe file I get an error message that says, "Application cannot be started." The error details are below: ERROR DETAILS Following errors were detected during this operation. * [3/18/2010 10:50:56 AM] System.Runtime.InteropServices.COMException - The referenced assembly is not installed on your system. (Exception from HRESULT: 0x800736B3) - Source: System.Deployment - Stack trace: at System.Deployment.Internal.Isolation.IStore.GetAssemblyInformation(UInt32 Flags, IDefinitionIdentity DefinitionIdentity, Guid& riid) at System.Deployment.Internal.Isolation.Store.GetAssemblyManifest(UInt32 Flags, IDefinitionIdentity DefinitionIdentity) at System.Deployment.Application.ComponentStore.GetAssemblyManifest(DefinitionIdentity asmId) at System.Deployment.Application.ComponentStore.GetSubscriptionStateInternal(DefinitionIdentity subId) at System.Deployment.Application.SubscriptionStore.GetSubscriptionStateInternal(SubscriptionState subState) at System.Deployment.Application.ComponentStore.CollectCrossGroupApplications(Uri codebaseUri, DefinitionIdentity deploymentIdentity, Boolean& identityGroupFound, Boolean& locationGroupFound, String& identityGroupProductName) at System.Deployment.Application.SubscriptionStore.CommitApplication(SubscriptionState& subState, CommitApplicationParams commitParams) at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc) at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl) at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state) I'm not sure what else to do. The only slightly odd thing I used in this application is the SQL Compact Server. Any help would be appreciated. Thanks, Andrew

    Read the article

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my OpenBSD client (OpenBSD 4.9) automount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount NFS manually: # mount_nfs -T -3 192.168.15.100:/exports /mnt # ls -la /mnt total 52 drwxr-xr-x 7 root wheel 4096 Oct 4 22:42 . drwxr-xr-x 16 root wheel 512 Nov 26 16:33 .. drwxrwxr-x 5 _sndio _sndio 4096 Oct 31 21:58 centos drwxr-xr-x 15 root wheel 4096 Nov 6 09:17 home drwxr-xr-x 5 root wheel 4096 Oct 31 21:27 sl drwxr-xr-x 3 root wheel 4096 Nov 19 16:02 sles drwxr-xr-x 17 503 503 4096 Nov 10 17:37 users # So connectivity is not an issue, as far as I can tell. As per the man page, the following is configured in /etc/amd/auto.home: /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp * rhost:=192.168.15.100;rfs:=/exports In turn, /etc/amd/master is configured as such: # cat /etc/amd/master /exports amd.home Upon reboot, I can see the mount, but curiously enough, it shows up under amd's pid instead of the hostname: amd:24490 0 0 0 100% /exports From what I understand, amd acts a little differently than it does on FreeBSD. Still, I tried to see if it can automount. Nope: ksh: cd: /exports/users - Resource temporarily unavailable # cd /exports/192.168.15.100/host/users ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable A search on Google doesn't help too much - it seems that automounting NFS with OpenBSD is not something that is usually done. Other than this, information is fairly sparse. I can, of course, always mount it permanently, but I tend to be a bit anal about convention, so no for now. :) Some direction would be appreciated. (And oh, in case you are wondering, I tried the FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD implements it and how OpenBSD implements it.) UPDATE: After re-writing the map file several times, I got as far as actually communicating with the NFS server with this configuration: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport However, for some reason, it seems that amd will only default to NFS version 2 over UDP: # tcpdump dst kerberos tcpdump: listening on pcn0, link-type EN10MB tcpdump: WARNING: compensating for unaligned libpcap packets 20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100 20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96 20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null 20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66 I tried various options to force it to mount as NFSv3, such as: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport or: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport Still nothing. Curiously enough, OpenBSD's mount defaults to version 3, so I am not sure why amd would start with version 2. What would be the correct options to pass?
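
    Before fighting the map syntax any further, it may be worth confirming from the OpenBSD side exactly what the Linux server is offering; a quick check, using the addresses from this setup, is sketched below.

      # which NFS versions and transports does the server register with rpcbind?
      rpcinfo -p 192.168.15.100 | egrep 'nfs|mountd'
      # what does it export, and to which clients?
      showmount -e 192.168.15.100

    If version 3 over TCP is listed (and the manual mount_nfs -T -3 above suggests it is), the problem is confined to how amd builds its mount options rather than to the server.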

    Read the article

  • PHP - XML Feed get print values

    - by danit
    Here is my feed: <entry> <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')</id> <title type="text">Building Web Applications with Microsoft SQL Azure</title> <summary type="text">SQL Azure provides a highly available and scalable relational database engine in the cloud. In this demo-intensive and interactive session, learn how to quickly build web applications with SQL Azure Databases and familiar web technologies. We demonstrate how you can quickly provision, build and populate a new SQL Azure database directly from your web browser. Also, see firsthand several new enhancements we are adding to SQL Azure based on the feedback we&#x2019;ve received from the community since launching the service earlier this year.</summary> <published>2010-01-25T00:00:00-05:00</published> <updated>2010-03-05T01:07:05-05:00</updated> <author> <name /> </author> <link rel="edit" title="Session" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Speakers" type="application/atom+xml;type=feed" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers"> <m:inline> <feed> <title type="text">Speakers</title> <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers</id> <updated>2010-03-25T11:56:06Z</updated> <link rel="self" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers" /> <entry> <id>http://api.visitmix.com/OData.svc/Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')</id> <title type="text">David-Robinson</title> <summary type="text"></summary> <updated>2010-03-25T11:56:06Z</updated> <author> <name>David Robinson</name> </author> <link rel="edit-media" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/$value" /> <link rel="edit" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Sessions" type="application/atom+xml;type=feed" title="Sessions" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/Sessions" /> <category term="EventModel.Speaker" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="image/jpeg" src="http://live.visitmix.com/Content/images/speakers/lrg/default.jpg" /> <m:properties xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"> <d:SpeakerID m:type="Edm.Guid">3395ee85-d994-423c-a726-76b60a896d2a</d:SpeakerID> <d:SpeakerFirstName>David</d:SpeakerFirstName> <d:SpeakerLastName>Robinson</d:SpeakerLastName> <d:LargeImage m:null="true"></d:LargeImage> <d:SmallImage m:null="true"></d:SmallImage> <d:Twitter m:null="true"></d:Twitter> </m:properties> </entry> </feed> </m:inline> </link> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Tags" type="application/atom+xml;type=feed" title="Tags" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Tags" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Files" type="application/atom+xml;type=feed" title="Files" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Files" /> <category term="EventModel.Session" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:SessionID m:type="Edm.Guid">816995df-b09a-447a-9391-019512f643a0</d:SessionID> 
    <d:Location>Breakers L</d:Location> <d:Type>Seminar</d:Type> <d:Code>SVC07</d:Code> <d:StartTime m:type="Edm.DateTime">2010-03-17T12:00:00</d:StartTime> <d:EndTime m:type="Edm.DateTime">2010-03-17T13:00:00</d:EndTime> <d:Slug>SVC07</d:Slug> <d:CreatedDate m:type="Edm.DateTime">2010-01-26T18:14:24.687</d:CreatedDate> <d:SourceID m:type="Edm.Guid">cddca9b7-6830-4d06-af93-5fd87afb67b0</d:SourceID> </m:properties> </content> </entry> I want to print the session title (Building Web Applications with Microsoft SQL Azure), the author (David Robinson), the location (Breakers L), and display the speaker's image (http://live.visitmix.com/Content/images/speakers/lrg/default.jpg). I presume I can use file_get_contents and then parse the result with SimpleXML, but I don't know how to get at the deeper items I want, like the author and the image. Any chance of a bit of coding genius here?
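
    Not PHP, but the XPath expressions below show where each of those deeper items lives; they can be tested with xmllint from libxml2 (assuming a build new enough to have the --xpath switch, and that the feed is saved locally as feed.xml, a made-up filename), and the same paths carry over to SimpleXML's xpath() once you allow for the Atom and dataservices namespaces.

      # session title (the first <title> in the entry)
      xmllint --xpath "string(//*[local-name()='title'][1])" feed.xml
      # speaker name (the only non-empty <author><name>)
      xmllint --xpath "string(//*[local-name()='author']/*[local-name()='name'][normalize-space()])" feed.xml
      # location, from the m:properties block
      xmllint --xpath "string(//*[local-name()='Location'])" feed.xml
      # speaker image URL (the content element typed image/jpeg)
      xmllint --xpath "string(//*[local-name()='content'][@type='image/jpeg']/@src)" feed.xml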

    Read the article

  • Nginx error page with JSON response

    - by Waseem
    I'm trying to serve a maintenance page to clients making requests to my application when it is under maintenance. Following is my nginx configuration for that purpose. server { recursive_error_pages on; listen 80; ... if (-f $document_root/maintenance.html) { return 503; } error_page 404 /404.html; error_page 500 502 504 /500.html; error_page 503 @503; location = /404.html { root $document_root; } location = /500.html { root $document_root; } location @503 { error_page 405 =/maintenance.html; if (-f $request_filename) { break; } rewrite ^(.*)$ /maintenance.html break; } } Let's say I have enabled maintenance on my site by creating $document_root/maintenance.html. This file is correctly served when a user makes a request with an Accept header of text/html. $ curl http://server.com/ -i -v -X GET -H "Accept: text/html" * Adding handle: conn: 0xf89420 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0xf89420) send_pipe: 1, recv_pipe: 0 * About to connect() to server.com port 80 (#0) * Trying xxx.xxx.xxx.xxx... * Connected to server.com (xxx.xxx.xxx.xxx) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.33.0 > Host: server.com > Accept: text/html > < HTTP/1.1 503 Service Temporarily Unavailable HTTP/1.1 503 Service Temporarily Unavailable * Server nginx/1.1.19 is not blacklisted < Server: nginx/1.1.19 Server: nginx/1.1.19 < Date: Thu, 14 Nov 2013 11:16:16 GMT Date: Thu, 14 Nov 2013 11:16:16 GMT < Content-Type: text/html Content-Type: text/html < Content-Length: 27 Content-Length: 27 < Connection: keep-alive Connection: keep-alive < This is under maintenance. * Connection #0 to host server.com left intact Now some clients set the Accept header to application/json. How do I send them a JSON response instead of maintenance.html? Following is the response that I get when setting Accept to application/json. $ curl http://server.com/ -i -v -X GET -H "Accept: application/json" * Adding handle: conn: 0x190c430 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x190c430) send_pipe: 1, recv_pipe: 0 * About to connect() to server.com port 80 (#0) * Trying xxx.xxx.xxx.xxx... * Connected to server.com (xxx.xxx.xxx.xxx) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.33.0 > Host: server.com > Accept: application/json > < HTTP/1.1 503 Service Temporarily Unavailable HTTP/1.1 503 Service Temporarily Unavailable * Server nginx/1.1.19 is not blacklisted < Server: nginx/1.1.19 Server: nginx/1.1.19 < Date: Thu, 14 Nov 2013 11:15:50 GMT Date: Thu, 14 Nov 2013 11:15:50 GMT < Content-Type: text/html Content-Type: text/html < Content-Length: 27 Content-Length: 27 < Connection: keep-alive Connection: keep-alive < This is under maintenance. * Connection #0 to host server.com left intact
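    One way to handle this (a sketch, not a tested config) is to let a map on $http_accept choose which file the 503 serves, so JSON clients get a hand-written maintenance.json instead of the HTML page. The maintenance.json file, its location next to maintenance.html, and the mime.types entry are all assumptions here; note the map block has to live at the http{} level, outside server{}.

        # http{} level: pick the 503 body from the Accept header.
        map $http_accept $maintenance_body {
            default             /maintenance.html;
            ~*application/json  /maintenance.json;   # any Accept value containing application/json
        }

        server {
            ...
            error_page 503 @503;

            location @503 {
                if (-f $request_filename) { break; }
                rewrite ^(.*)$ $maintenance_body break;
            }
        }

    maintenance.json is just a static file you create yourself, e.g. {"status":"maintenance"}; if your mime.types does not already map .json to application/json, add "application/json json;" so the Content-Type comes out right.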

    Read the article

  • JSF2 and Richfaces 3.3.3 application on tomcat 6.0 crashes with a StackOverflowError

    - by Vivek Madapura V
    Hi, I am using JSF 2 and RichFaces 3.3.3 for an application hosted on Tomcat 6.0.20. The application crashes as soon as a request is made via the browser (Mozilla and IE). My web.xml looks like this: <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>TestJSF</display-name> <welcome-file-list> <welcome-file>pages/login.xhtml</welcome-file> </welcome-file-list> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>/faces/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.xhtml</url-pattern> </servlet-mapping> <context-param> <description>State saving method: 'client' or 'server' (=default). See JSF Specification 2.5.2</description> <param-name>javax.faces.STATE_SAVING_METHOD</param-name> <param-value>server</param-value> </context-param> <context-param> <param-name>javax.faces.DISABLE_FACELET_JSF_VIEWHANDLER</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>blueSky</param-value> </context-param> <context-param> <param-name>org.richfaces.CONTROL_SKINNING</param-name> <param-value>enable</param-value> </context-param> <context-param> <param-name>javax.faces.DEFAULT_SUFFIX</param-name> <param-value>.xhtml</param-value> </context-param> <context-param> <param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name> <param-value>true</param-value> </context-param> <listener> <listener-class>com.sun.faces.config.ConfigureListener</listener-class> </listener> <filter> <display-name>RichFaces Filter</display-name> <filter-name>richfaces</filter-name> <filter-class>org.ajax4jsf.Filter</filter-class> </filter> <filter-mapping> <filter-name>richfaces</filter-name> <servlet-name>Faces Servlet</servlet-name> <dispatcher>REQUEST</dispatcher> <dispatcher>FORWARD</dispatcher> <dispatcher>INCLUDE</dispatcher> </filter-mapping> </web-app> The exception is javax.servlet.ServletException: Servlet execution threw an exception org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:530) com.sun.faces.context.ExternalContextImpl.dispatch(ExternalContextImpl.java:542) com.sun.faces.application.view.JspViewHandlingStrategy.executePageToBuildView(JspViewHandlingStrategy.java:359) com.sun.faces.application.view.JspViewHandlingStrategy.buildView(JspViewHandlingStrategy.java:150) com.sun.faces.application.view.JspViewHandlingStrategy.renderView(JspViewHandlingStrategy.java:190) com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:127) org.ajax4jsf.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:100) org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:176) com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:117) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:97) com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:135) javax.faces.webapp.FacesServlet.service(FacesServlet.java:309) The stack trace is recursively logged with this until the StackOverflowError occurs.
If I remove all the configuration related to RichFaces, the application works like a charm. Any advice is much appreciated.
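    Judging by the trace, the request loops because the JSP view-handling strategy dispatches back to the .xhtml resource, and the *.xhtml servlet-mapping routes that dispatch straight back into the FacesServlet. That reading is a guess rather than a confirmed diagnosis, but one thing worth trying is letting JSF 2's built-in Facelets handle the .xhtml views instead of disabling it:

        <!-- Sketch: with JSF 2 the built-in Facelets view handler is normally what you want,
             so either remove this context-param entirely or set it to false. -->
        <context-param>
            <param-name>javax.faces.DISABLE_FACELET_JSF_VIEWHANDLER</param-name>
            <param-value>false</param-value>
        </context-param>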

    Read the article

  • Why is Available Physical Memory (dwAvailPhys) > Available Virtual Memory (dwAvailVirtual) in call GlobalMemoryStatus

    - by Andrew
    I am playing with an MSDN sample to do memory stress testing (see: http://msdn.microsoft.com/en-us/magazine/cc163613.aspx) and an extension of that tool that specifically eats physical memory (see http://www.donationcoder.com/Forums/bb/index.php?topic=14895.0;prev_next=next). I am obviously confused, though, about the differences between virtual and physical memory. I thought each process has 2 GB of virtual memory (although I also read 1.5 GB because of "overhead"). My understanding was that some/all/none of this virtual memory could be physical memory, and the amount of physical memory used by a process could change over time (memory could be swapped out to disc, etc.). I further thought that, in general, when you allocate memory, the operating system could use physical memory or virtual memory. From this, I conclude that dwAvailVirtual should always be equal to or greater than dwAvailPhys in the call GlobalMemoryStatus. However, I often (always?) see the opposite. What am I missing? I apologize in advance if my question is not well formed. I'm still trying to get my head around the whole memory management system in Windows. Tutorials/Explanations/Book recs are most welcome! Andrew
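    For what it's worth, the two numbers measure different scopes: dwAvailPhys is free physical RAM for the whole machine, while dwAvailVirtual is only the unreserved, uncommitted part of the calling process's roughly 2 GB user-mode address space. On a machine with more than about 2 GB of RAM the physical figure can therefore easily exceed the per-process virtual one. A small sketch using the newer GlobalMemoryStatusEx (which avoids the truncation problems of GlobalMemoryStatus on large-memory machines) to print both:

        /* Sketch: compare system-wide free RAM with this process's free address space. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            MEMORYSTATUSEX ms;
            ms.dwLength = sizeof(ms);

            if (!GlobalMemoryStatusEx(&ms)) {
                fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
                return 1;
            }

            /* ullAvailPhys: free physical memory on the machine.
               ullAvailVirtual: free user-mode address space in this process only. */
            printf("Available physical: %I64u MB\n", ms.ullAvailPhys    / (1024 * 1024));
            printf("Available virtual : %I64u MB\n", ms.ullAvailVirtual / (1024 * 1024));
            printf("Total virtual     : %I64u MB\n", ms.ullTotalVirtual / (1024 * 1024));
            return 0;
        }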

    Read the article

  • Courier Maildrop error user unknown. Command output: Invalid user specified

    - by cad
    Hello, I have a problem with maildrop. I have read dozens of webs/howtos/emails but couldn't solve it. My objective is to automatically move spam messages to a spam folder. My email server is working perfectly. It marks spam in the subject and headers using SpamAssassin. My box has: Ubuntu 9.04 Web: Apache2 + Php5 + MySQL MTA: Postfix 2.5.5 + SpamAssassin + virtual users using MySQL IMAP: Courier 0.61.2 + Courier AuthLib WebMail: SquirrelMail I have read that I could use SquirrelMail directly (not a good idea), procmail or maildrop. As I already have maildrop in the box (from Courier) I have configured the server to use maildrop (added an entry in the transport table for a virtual domain). I found this error in the email: This is the mail system at host foo.net I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]>: user unknown. Command output: Invalid user specified. Final-Recipient: rfc822; [email protected] Action: failed Status: 5.1.1 Diagnostic-Code: x-unix; Invalid user specified. ---------- Forwarded message ---------- From: test <[email protected]> To: [email protected] Date: Sat, 1 May 2010 19:49:57 +0100 Subject: fail fail And this in the logs: May 1 18:50:18 foo.net postfix/smtpd[14638]: connect from mail-bw0-f212.google.com[209.85.218.212] May 1 18:50:19 foo.net postfix/smtpd[14638]: 8A9E9DC23F: client=mail-bw0-f212.google.com[209.85.218.212] May 1 18:50:19 foo.net postfix/cleanup[14643]: 8A9E9DC23F: message-id=<[email protected]> May 1 18:50:19 foo.net postfix/qmgr[14628]: 8A9E9DC23F: from=<[email protected]>, size=1858, nrcpt=1 (queue active) May 1 18:50:23 foo.net postfix/pickup[14627]: 1D4B4DC2AA: uid=5002 from=<[email protected]> May 1 18:50:23 foo.net postfix/cleanup[14643]: 1D4B4DC2AA: message-id=<[email protected]> May 1 18:50:23 foo.net postfix/pipe[14644]: 8A9E9DC23F: to=<[email protected]>, relay=spamassassin, delay=3.8, delays=0.55/0.02/0/3.2, dsn=2.0.0, status=sent (delivered via spamassassin service) May 1 18:50:23 foo.net postfix/qmgr[14628]: 8A9E9DC23F: removed May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: from=<[email protected]>, size=2173, nrcpt=1 (queue active) **May 1 18:50:23 foo.net postfix/pipe[14648]: 1D4B4DC2AA: to=<[email protected]>, relay=maildrop, delay=0.22, delays=0.06/0.01/0/0.15, dsn=5.1.1, status=bounced (user unknown. Command output: Invalid user specified.
)** May 1 18:50:23 foo.net postfix/cleanup[14643]: 4C2BFDC240: message-id=<[email protected]> May 1 18:50:23 foo.net postfix/qmgr[14628]: 4C2BFDC240: from=<>, size=3822, nrcpt=1 (queue active) May 1 18:50:23 foo.net postfix/bounce[14651]: 1D4B4DC2AA: sender non-delivery notification: 4C2BFDC240 May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: removed May 1 18:50:24 foo.net postfix/smtp[14653]: 4C2BFDC240: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.211.97]:25, delay=0.91, delays=0.02/0.03/0.12/0.74, dsn=2.0.0, status=sent (250 2.0.0 OK 1272739824 37si5422420ywh.59) May 1 18:50:24 foo.net postfix/qmgr[14628]: 4C2BFDC240: removed My config files: http://lar3d.net/main.cf (/etc/postfix) http://lar3d.net/master.c (/etc/postfix) http://lar3d.net/local.cf (/etc/spamassasin) http://lar3d.net/maildroprc (maildroprc) If I change the master.cf line (as suggested here) maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d ${recipient} to maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d vmail ${recipient} I get the email in /home/vmail/MailDir instead of the correct dir (/home/vmail/foo.net/info/.SPAM ). After reading a lot I have some guesses, but I'm not sure: - Maybe I have to install userdb? - Maybe it's something related to MySQL, but everything is working OK - If I try with procmail I will face the same problem... - What are the flags DRhu for? Couldn't find docs about them - In some places I found a maildrop line with more parameters: flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d $ ${recipient} ${extension} ${recipient} ${user} ${nexthop} ${sender} I am really lost. I don't know how to continue. If you have any idea or need another config file please let me know. Thanks!!!
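    For what it's worth, maildrop's -d option expects a user it can actually look up (through the system accounts or courier-authlib/userdb), which is why -d ${recipient} fails with "Invalid user specified" for purely virtual recipients, and why -d vmail delivers but only to vmail's default Maildir. One way around that (a sketch, untested; the file name and directory layout are assumptions based on the /home/vmail/<domain>/<user> structure above) is to keep -d vmail, pass the recipient as an extra argument so it shows up as $1 in the filter, and do the routing in vmail's filter file:

        # master.cf (sketch): hand maildrop the recipient as $1
        maildrop  unix  -  n  n  -  -  pipe
          flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d vmail ${recipient}

        # /home/vmail/.mailfilter (sketch): route on the recipient address
        RECIPIENT="$1"
        USERPART=`echo "$RECIPIENT" | cut -d@ -f1`
        DOMAIN=`echo "$RECIPIENT" | cut -d@ -f2`
        MAILDIR="/home/vmail/$DOMAIN/$USERPART"
        DEFAULT="$MAILDIR"

        # SpamAssassin flags spam with this header
        if (/^X-Spam-Flag: *YES/)
        {
            to "$MAILDIR/.SPAM/"
        }

        to "$MAILDIR/"

    The trailing slashes tell maildrop these are Maildirs, and .mailfilter has to be owned by vmail and not group- or world-writable or maildrop will refuse to run it.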

    Read the article

  • maximum execution time for javascript

    - by Andrew Chang
    I know both IE and Firefox have limits for JavaScript execution: http://support.microsoft.com/?kbid=175500 (based on the number of statements executed; I heard it was 5 million somewhere in IE) and http://webcache.googleusercontent.com/search?q=cache:_iKHhdfpN-MJ:kb.mozillazine.org/Dom.max_script_run_time+dom.max_script_run_time&hl=en&gl=ca&strip=1 (Google cache, since the site takes forever to load for me; based on the number of seconds in Firefox, it's 10 seconds by default for my version). The thing I don't get is which cases will go over these limits. I'm sure a giant loop will go over the execution-time limit, but will an event handler go over the limit if its own execution time is under the limit but it fires multiple times? Example: let's say I have a timer on my page that executes some JavaScript every 20 seconds. The execution time for the timer handler is 1 second. Do Firefox and IE treat each call of the timer function separately, so it never goes over the limit, or do they add up the time of each call, so that after 200 seconds on my site (with the timer called 10 times) an error occurs even though the timer handler itself is only 1 second long?
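    As far as I can tell, both limits apply to a single, uninterrupted run of script: IE counts statements and Firefox counts seconds from the moment the engine enters your code until it returns, and the counter resets once control goes back to the browser. So a 1-second handler fired every 20 seconds should never trip either limit; only an individual run that is too long will. When one run really is too long, the usual trick is to slice the work with setTimeout, roughly like this (illustrative sketch only):

        // Sketch: process a big array in slices so no single run holds the thread too long.
        function processInChunks(items, chunkSize, handleItem) {
          var i = 0;
          function runChunk() {
            var end = Math.min(i + chunkSize, items.length);
            for (; i < end; i++) {
              handleItem(items[i]);       // the real per-item work goes here
            }
            if (i < items.length) {
              setTimeout(runChunk, 0);    // yield to the browser, then continue
            }
          }
          runChunk();
        }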

    Read the article

  • Bare-metal virtualisation for the desktop

    - by Andrew Taylor
    Hi, does anyone have any knowledge about bare-metal virtualisation products? I'm interested in building a new desktop machine for home. I've been looking at the Intel Quad Core processors and I'd like to put 8GB of RAM in there, and it got me thinking about making the most of the available resources. I thought if I could get a good 64-bit machine and put some bare-metal virtualisation on it, then as well as a primary system I'd be able to bring up extra virtualised systems as and when I needed them. I know most of the bare-metal systems are designed for the server market, but is there anything out there that works well for a desktop? What are the caveats? I presume I won't be able to make the most of any video card I could buy; what about just getting a decent screen resolution, will that be a problem? I run a single 24" screen. What about DVD/CD writing, is this possible? I'd like to re-rip my CD collection, and I was hoping the quad-core 64-bit goodness would help me out with the encoding. I currently use a Mac and couldn't go back to Windows, so that leaves Linux; I was thinking a primary OS of Ubuntu. Does this make a difference? Thanks Andrew

    Read the article
