Search Results

Search found 20383 results on 816 pages for 'hello all'.


  • Rack middleware deadlock

    - by Joel
    I include this simple Rack middleware in a Rails application:

        class Hello
          def initialize(app)
            @app = app
          end

          def call(env)
            [200, {"Content-Type" => "text/html"}, "Hello"]
          end
        end

    Plug it in inside environment.rb:

        ...
        Dir.glob("#{RAILS_ROOT}/lib/rack_middleware/*.rb").each do |file|
          require file
        end

        Rails::Initializer.run do |config|
          config.middleware.use Hello
        ...

    I'm using Rails 2.3.5, WEBrick 1.3.1 and Ruby 1.8.7. When the application is started in production mode, everything works as expected: every request is intercepted by the Hello middleware and "Hello" is returned. However, when run in development mode, the very first request works and returns "Hello", but the next request hangs. Interrupting WEBrick while it is in the hung state yields this:

        ^C[2010-03-24 14:31:39] INFO going to shutdown ...
        deadlock 0xb6efbbc0: sleep:- - /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:31
        deadlock 0xb7d1b1b0: sleep:J(0xb6efbbc0) (main) - /usr/lib/ruby/1.8/webrick/server.rb:113
        Exiting
        /usr/lib/ruby/1.8/webrick/server.rb:113:in `join': Thread(0xb7d1b1b0): deadlock (fatal)
            from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start'
            from /usr/lib/ruby/1.8/webrick/server.rb:113:in `each'
            from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start'
            from /usr/lib/ruby/1.8/webrick/server.rb:23:in `start'
            from /usr/lib/ruby/1.8/webrick/server.rb:82:in `start'
            from /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:111
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from script/server:3

    It looks like something to do with the class reloader in development mode, and the exception itself mentions a deadlock. Any ideas what might be causing this? Any recommendations as to the best approach to debug this?
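    For reference, the Rack spec expects the response body to respond to #each rather than being a bare String. Below is a minimal sketch of the same middleware with an array body and a pass-through to the wrapped app; the "/hello" path check is hypothetical, and this is not claimed to resolve the development-mode reloader deadlock:

        class Hello
          def initialize(app)
            @app = app
          end

          def call(env)
            # Handle only one hypothetical path; everything else is delegated,
            # so Rails (and its development-mode reloader) still sees the request.
            if env["PATH_INFO"] == "/hello"
              [200, { "Content-Type" => "text/html" }, ["Hello"]]  # body responds to #each
            else
              @app.call(env)
            end
          end
        end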


  • Segmentation fault with queue in C

    - by Trevor
    I am getting a segmentation fault with the following code after adding structs to my queue. The segmentation fault occurs when MAX_QUEUE is set high, but when I set it low (100 or 200) the error doesn't occur. It has been a while since I last programmed in C, so any help is appreciated.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_QUEUE 1000

        struct myInfo {
            char data[20];
        };

        struct myInfo* queue;
        void push(struct myInfo);

        int queue_head = 0;
        int queue_size = 0;

        int main(int argc, char *argv[])
        {
            queue = (struct myInfo*) malloc(sizeof(struct myInfo) * MAX_QUEUE);
            struct myInfo info;
            char buf[10];
            strcpy(buf, "hello");
            while (1) {
                strcpy(info.data, buf);
                push(info);
            }
        }

        void push(struct myInfo info)
        {
            int next_index = sizeof(struct myInfo) * ((queue_size + queue_head) % MAX_QUEUE);
            printf("Pushing %s to %d\n", info.data, next_index);
            *(queue + (next_index)) = info;
            queue_size++;
        }

    Output:

        Pushing hello to 0
        Pushing hello to 20
        ...
        Pushing hello to 7540
        Pushing hello to 7560
        Pushing hello to 7580
        Segmentation fault
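    The crash pattern is consistent with mixing byte offsets and element indices: next_index is computed in bytes, but pointer arithmetic on a struct myInfo* already scales by sizeof(struct myInfo), so the store lands well past the allocation once the index grows. A sketch of push() using element indices (one possible fix, not the poster's confirmed solution):

        void push(struct myInfo info)
        {
            /* index in elements, not bytes: queue + next_index already advances
               by next_index * sizeof(struct myInfo) */
            int next_index = (queue_head + queue_size) % MAX_QUEUE;
            printf("Pushing %s to %d\n", info.data, next_index);
            queue[next_index] = info;
            queue_size++;
        }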


  • Java: 2-assignments-2-initializations inside for-loop not allowed?

    - by HH
    $ javac MatchTest.java

        MatchTest.java:7: ')' expected
            for((int i=-1 && String match="hello"); (i=text.indexOf(match)+1);)
        MatchTest.java:7: ';' expected
            for((int i=-1 && String match="hello"); (i=text.indexOf(match)+1);)
        MatchTest.java:7: ';' expected
            for((int i=-1 && String match="hello"); (i=text.indexOf(match)+1);)
        MatchTest.java:7: not a statement
            for((int i=-1 && String match="hello"); (i=text.indexOf(match)+1);)
        MatchTest.java:7: illegal start of expression
            for((int i=-1 && String match="hello"); (i=text.indexOf(match)+1);)
        5 errors

    $ cat MatchTest.java

        import java.util.*;
        import java.io.*;

        public class MatchTest {
            public static void main(String[] args){
                String text = "hello0123456789hello0123456789hello1234567890hello3423243423232";
                for((int i=-1 && String match="hello"); (i=text.indexOf(match)+1);)
                    System.out.println(i);
            }
        }
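    The for-statement's initializer can declare variables of only one type, and the loop condition must be a boolean expression, so the int/String combination above cannot be written there. A sketch of the closest legal Java, with the String declared outside the loop (assuming the intent is to print the index of each occurrence):

        public class MatchTest {
            public static void main(String[] args) {
                String text = "hello0123456789hello0123456789hello1234567890hello3423243423232";
                String match = "hello";   // declared outside: the for-init may declare only one type
                for (int i = text.indexOf(match); i >= 0; i = text.indexOf(match, i + 1)) {
                    System.out.println(i); // index of each occurrence
                }
            }
        }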


  • Compile Assembly Output generated by VC++?

    - by SDD
    I have a simple hello world C program and compile it with /FA. As a consequence, the compiler also generates the corresponding assembly listing. Now I want to use masm/link to assemble an executable from the generated .asm listing. The following command line yields 3 linker errors:

        \masm32\bin\ml /I"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include" /c /coff asm_test.asm
        \masm32\bin\link /SUBSYSTEM:CONSOLE /LIBPATH:"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\lib" asm_test.obj

    indicating that the C-runtime functions were not linked to the object files produced earlier:

        asm_test.obj : error LNK2001: unresolved external symbol @__security_check_cookie@4
        asm_test.obj : error LNK2001: unresolved external symbol _printf
        LINK : error LNK2001: unresolved external symbol _wmainCRTStartup
        asm_test.exe : fatal error LNK1120: 3 unresolved externals

    Here is the generated assembly listing:

        ; Listing generated by Microsoft (R) Optimizing Compiler Version 15.00.30729.01
        TITLE   c:\asm_test\asm_test\asm_test.cpp
        .686P
        .XMM
        include listing.inc
        .model  flat
        INCLUDELIB OLDNAMES
        PUBLIC  ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@       ; `string'
        EXTRN   @__security_check_cookie@4:PROC
        EXTRN   _printf:PROC
        ; COMDAT ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@
        CONST   SEGMENT
        ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@ DB 'hello world!', 0aH, 00H    ; `string'
        CONST   ENDS
        PUBLIC  _wmain
        ; Function compile flags: /Ogtpy
        ; COMDAT _wmain
        _TEXT   SEGMENT
        _argc$ = 8      ; size = 4
        _argv$ = 12     ; size = 4
        _wmain  PROC    ; COMDAT
        ; File c:\users\octon\desktop\asm_test\asm_test\asm_test.cpp
        ; Line 21
            push    OFFSET ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@
            call    _printf
            add     esp, 4
        ; Line 22
            xor     eax, eax
        ; Line 23
            ret     0
        _wmain  ENDP
        _TEXT   ENDS
        END

    I am using the latest masm32 version (6.14.8444).


  • The new operator in C# isn't overriding base class member

    - by Dominic Zukiewicz
    I am confused as to why the new operator isn't working as I expected it to. Note: all classes below are defined in the same namespace, and in the same file.

    This class allows you to prefix any content written to the console with some provided text:

        public class ConsoleWriter
        {
            private string prefix;

            public ConsoleWriter(string prefix)
            {
                this.prefix = prefix;
            }

            public void Write(string text)
            {
                Console.WriteLine(String.Concat(prefix, text));
            }
        }

    Here is a base class:

        public class BaseClass
        {
            protected static ConsoleWriter consoleWriter = new ConsoleWriter("");

            public static void Write(string text)
            {
                consoleWriter.Write(text);
            }
        }

    Here is an implemented class:

        public class NewClass : BaseClass
        {
            protected new static ConsoleWriter consoleWriter = new ConsoleWriter("> ");
        }

    Now here's the code to execute this:

        class Program
        {
            static void Main(string[] args)
            {
                BaseClass.Write("Hello World!");
                NewClass.Write("Hello World!");
                Console.Read();
            }
        }

    So I would expect the output to be

        Hello World!
        > Hello World!

    But the output is

        Hello World
        Hello World

    I do not understand why this is happening. Here is my thought process as to what is happening:

    1. The CLR calls the BaseClass.Write() method.
    2. The CLR initialises the BaseClass.consoleWriter member.
    3. The method is called and executed with the BaseClass.consoleWriter variable.

    Then:

    4. The CLR calls NewClass.Write().
    5. The CLR initialises the NewClass.consoleWriter object.
    6. The CLR sees that the implementation lies in BaseClass, but the method is inherited through.
    7. The CLR executes the method locally (in NewClass) using the NewClass.consoleWriter variable.

    I thought this is how the inheritance structure works? Please can someone help me understand why this is not working?
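    A new static member only hides the base field for code compiled against NewClass; BaseClass.Write is compiled against BaseClass.consoleWriter, and the call NewClass.Write(...) binds to that same base method at compile time. One way to get the expected output, shown as a sketch rather than the only fix, is to also re-declare the static Write in the derived class so that copy reads the hiding field:

        public class NewClass : BaseClass
        {
            protected new static ConsoleWriter consoleWriter = new ConsoleWriter("> ");

            // Hides BaseClass.Write; this copy is compiled against NewClass.consoleWriter.
            public new static void Write(string text)
            {
                consoleWriter.Write(text);
            }
        }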


  • Uniform grid Rows and Columns

    - by Carlo
    I'm trying to make a grid based on a UniformGrid to show the coordinates of each cell, and I want to show the values on the X and Y axes like so:

          _A_ _B_ _C_ _D_
        1 |___|___|___|___|
        2 |___|___|___|___|
        3 |___|___|___|___|
        4 |___|___|___|___|

    Anyway, in order to do that I need to know the number of columns and rows in the UniformGrid, and I tried overriding the 3 most basic methods where the arrangement / drawing happens, but the columns and rows in there are 0, even though I have some controls in my grid. What method can I override so my Cartesian grid knows how many columns and rows it has?

    C#:

        public class CartesianGrid : UniformGrid
        {
            protected override Size MeasureOverride(Size constraint)
            {
                Size size = base.MeasureOverride(constraint);
                int computedColumns = this.Columns; // always 0
                int computedRows = this.Rows;       // always 0
                return size;
            }

            protected override Size ArrangeOverride(Size arrangeSize)
            {
                Size size = base.ArrangeOverride(arrangeSize);
                int computedColumns = this.Columns; // always 0
                int computedRows = this.Rows;       // always 0
                return size;
            }

            protected override void OnRender(DrawingContext dc)
            {
                int computedColumns = this.Columns; // always 0
                int computedRows = this.Rows;       // always 0
                base.OnRender(dc);
            }
        }

    XAML:

        <local:CartesianGrid>
            <Label Content="Hello" />
            <Label Content="Hello" />
            <Label Content="Hello" />
            <Label Content="Hello" />
            <Label Content="Hello" />
            <Label Content="Hello" />
        </local:CartesianGrid>

    Any help is greatly appreciated. Thanks!
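    When Rows and Columns are left at 0, UniformGrid computes the layout counts in private state and never writes them back to the public properties, which is why they stay 0 in the overrides above. A sketch that derives equivalent values from the child count, mirroring the commonly described UniformGrid layout rule (treat the exact rule as an assumption, not documented behaviour):

        private void ComputeGrid(out int columns, out int rows)
        {
            columns = this.Columns;
            rows = this.Rows;
            if (columns == 0 || rows == 0)
            {
                int count = this.InternalChildren.Count;           // children being laid out
                if (columns == 0)
                    columns = (int)Math.Ceiling(Math.Sqrt(count)); // roughly square grid
                if (rows == 0)
                    rows = (int)Math.Ceiling((double)count / columns);
            }
        }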


  • Why I'm not getting "Multiple definition" error from the g++?

    - by ban
    I tried to link my executable program with 2 static libraries using g++. The 2 static libraries have the same function name. I'm expecting a "multiple definition" linking error from the linker, but I did not receive one. Can anyone help to explain why this is so?

    staticLibA.h:

        #ifndef _STATIC_LIBA_HEADER
        #define _STATIC_LIBA_HEADER
        int hello(void);
        #endif

    staticLibA.cpp:

        #include "staticLibA.h"

        int hello(void)
        {
            printf("\nI'm in staticLibA\n");
            return 0;
        }

    output:

        g++ -c -Wall -fPIC -m32 -o staticLibA.o staticLibA.cpp
        ar -cvq ../libstaticLibA.a staticLibA.o
        a - staticLibA.o

    staticLibB.h:

        #ifndef _STATIC_LIBB_HEADER
        #define _STATIC_LIBB_HEADER
        int hello(void);
        #endif

    staticLibB.cpp:

        #include "staticLibB.h"

        int hello(void)
        {
            printf("\nI'm in staticLibB\n");
            return 0;
        }

    output:

        g++ -c -Wall -fPIC -m32 -o staticLibB.o staticLibB.cpp
        ar -cvq ../libstaticLibB.a staticLibB.o
        a - staticLibB.o

    main.cpp:

        extern int hello(void);

        int main(void)
        {
            hello();
            return 0;
        }

    output:

        g++ -c -o main.o main.cpp
        g++ -o multipleLibsTest main.o -L. -lstaticLibA -lstaticLibB -lstaticLibC -ldl -lpthread -lrt
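    The linker searches a static archive only for symbols that are still unresolved at that point in the command line; libstaticLibA.a satisfies hello(), so the member of libstaticLibB.a is never pulled in and no clash can be detected. Linking both object files directly does report the conflict; the command below is a sketch and the error text is approximate:

        g++ -o multipleLibsTest main.o staticLibA.o staticLibB.o
        # ld: staticLibB.o: multiple definition of `hello()'; staticLibA.o: first defined here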


  • How to control div on hover:?

    - by AAA
    I found a way to change the background color of a menu option on hover. However, when you hover over an option it takes up extra space and pushes all the other options to the right, which is sort of annoying. I want to maintain consistent spacing, so when I hover only the color should change, without the option moving to the right. Sort of the way Facebook has its menu options. Below is the code:

        <div id="menu">
            <a href="/hello" id="option">home</a>
            <a href="/hello" id="option">profile</a>
            <a href="/hello" id="option">account</a>
            <a href="/hello" id="option">settings</a>
            <a href="/hello" id="option">extra</a>
            <a href="/hello" id="option">logout</a>
        </div>

    CSS:

        div#menu {
            margin-left: 630px;
            margin-top: -20px;
        }

        option {
            margin-left: 20px;
        }

        #option:hover {
            background: #3F2327;
            padding: 10px;
        }
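    The jump happens because the 10px padding exists only in the :hover state, so each link gets wider while hovered. Reserving the padding all the time, and selecting the links via the menu (the repeated id="option" is invalid HTML; a class or a descendant selector avoids it), keeps the spacing fixed. A sketch:

        #menu a {
            margin-left: 20px;
            padding: 10px;           /* space is reserved whether or not the link is hovered */
        }

        #menu a:hover {
            background: #3F2327;     /* only the colour changes on hover */
        }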


  • Getting Started With Sinatra

    - by Liam McLennan
    Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra: require 'rubygems' require 'sinatra' get '/hi' do "Hello World!" end A haml view is rendered by: def '/' haml :name_of_your_view end Haml is also new to me. It is a ruby-based view engine that uses significant white space to avoid having to close tags. A hello world web page in haml might look like: %html %head %title Hello World %body %div Hello World You see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read. Based on my syntax highlighter for Gherkin I have started to build a sinatra web application that publishes syntax highlighted gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, trac etc. The first thing I want my application to be able to do is display a list of the features that it knows about. This will happen when a user requests the root of the application. Here is my sinatra handler: get '/' do feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new) @features = feature_service.features(settings.feature_path, settings.feature_extensions) haml :index end The handler and the view are in the same scope so the @features variable will be available in the view. This is the same way that rails passes data between actions and views. The view to render the result is: %h2 Features %ul - @features.each do |feature| %li %a{:href => "/feature/#{feature.name}"}= feature.name Clearly this is not a complete web page. I am using a layout to provide the basic html page structure. This view renders an <li> for each feature, with a link to /feature/#{feature.name}. Here is what the page looks like: When the user clicks on one of the links I want to display the contents of that feature file. The required handler is: get '/feature/:feature' do @feature_name = params[:feature] feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new) # TODO replace with feature_service.feature(name) @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature| feature.name == @feature_name end haml :feature end and the view: %h2= @feature.name %pre{:class => "brush: gherkin"}= @feature.description %div= partial :_back_to_index %script{:type => "text/javascript", :src => "/scripts/shCore.js"} %script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"} %script{:type => "text/javascript" } SyntaxHighlighter.all(); Now when I click on the Search link I get a nicely formatted feature file: If you would like see the full source it is available on bitbucket.


  • Mismatch between the program and library build versions detected

    - by Alex Farber
    I built wxWidgets on Linux using this command: ../configure --enable-shared --disable-debug It see results of this build: /usr/local/lib/wx/config/gtk2-ansi-release-2.8 /usr/local/lib/wx/include/gtk2-ansi-release-2.8/wx/setup.h wx-config output: alex@alex-linux:~$ wx-config --list Default config is gtk2-ansi-release-2.8 Default config will be used for output Alternate matches: gtk2-ansi-debug-2.8 gtk2-ansi-debug-static-2.8 gtk2-ansi-release-static-2.8 alex@alex-linux:~$ wx-config --cppflags --release 2.8 -I/usr/local/lib/wx/include/gtk2-ansi-release-2.8 -I/usr/local/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ alex@alex-linux:~$ wx-config --libs --release 2.8 -L/usr/local/lib -pthread -lwx_gtk2_richtext-2.8 -lwx_gtk2_aui-2.8 -lwx_gtk2_xrc-2.8 -lwx_gtk2_qa-2.8 -lwx_gtk2_html-2.8 -lwx_gtk2_adv-2.8 -lwx_gtk2_core-2.8 -lwx_base_xml-2.8 -lwx_base_net-2.8 -lwx_base-2.8 Now I am trying to build Hello wxWidgets program with Release version: g++ -I/usr/local/lib/wx/include/gtk2-ansi-release-2.8 -I/usr/local/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ hello.cpp -o hello -L/usr/local/lib -pthread -lwx_gtk2_richtext-2.8 -lwx_gtk2_aui-2.8 -lwx_gtk2_xrc-2.8 -lwx_gtk2_qa-2.8 -lwx_gtk2_html-2.8 -lwx_gtk2_adv-2.8 -lwx_gtk2_core-2.8 -lwx_base_xml-2.8 -lwx_base_net-2.8 -lwx_base-2.8 It compiles and runs successfully on my computer. Program dependencies: ldd hello linux-gate.so.1 = (0x006ef000) libwx_gtk2_richtext-2.8.so.0 = /usr/local/lib/libwx_gtk2_richtext-2.8.so.0 (0x00253000) libwx_gtk2_aui-2.8.so.0 = /usr/local/lib/libwx_gtk2_aui-2.8.so.0 (0x005ff000) libwx_gtk2_xrc-2.8.so.0 = /usr/local/lib/libwx_gtk2_xrc-2.8.so.0 (0x00110000) libwx_gtk2_qa-2.8.so.0 = /usr/local/lib/libwx_gtk2_qa-2.8.so.0 (0x00a3c000) libwx_gtk2_html-2.8.so.0 = /usr/local/lib/libwx_gtk2_html-2.8.so.0 (0x0019d000) libwx_gtk2_adv-2.8.so.0 = /usr/local/lib/libwx_gtk2_adv-2.8.so.0 (0x00c18000) libwx_gtk2_core-2.8.so.0 = /usr/local/lib/libwx_gtk2_core-2.8.so.0 (0x00ef8000) libwx_base_xml-2.8.so.0 = /usr/local/lib/libwx_base_xml-2.8.so.0 (0x0047e000) libwx_base_net-2.8.so.0 = /usr/local/lib/libwx_base_net-2.8.so.0 (0x00353000) libwx_base-2.8.so.0 = /usr/local/lib/libwx_base-2.8.so.0 (0x006f0000) ... Now I want to execute this program on another computer without wxWidgets installed. I copy the program and all shared libraries to another computer: hello libwx_gtk2_core-2.8.so libwx_base-2.8.so libwx_gtk2_core-2.8.so.0 libwx_base-2.8.so.0 libwx_gtk2_core-2.8.so.0.6.0 libwx_base-2.8.so.0.6.0 libwx_gtk2_html-2.8.so libwx_base_net-2.8.so libwx_gtk2_html-2.8.so.0 libwx_base_net-2.8.so.0 libwx_gtk2_html-2.8.so.0.6.0 libwx_base_net-2.8.so.0.6.0 libwx_gtk2_qa-2.8.so libwx_base_xml-2.8.so libwx_gtk2_qa-2.8.so.0 libwx_base_xml-2.8.so.0 libwx_gtk2_qa-2.8.so.0.6.0 libwx_base_xml-2.8.so.0.6.0 libwx_gtk2_richtext-2.8.so libwx_gtk2_adv-2.8.so libwx_gtk2_richtext-2.8.so.0 libwx_gtk2_adv-2.8.so.0 libwx_gtk2_richtext-2.8.so.0.6.0 libwx_gtk2_adv-2.8.so.0.6.0 libwx_gtk2_xrc-2.8.so libwx_gtk2_aui-2.8.so libwx_gtk2_xrc-2.8.so.0 libwx_gtk2_aui-2.8.so.0 libwx_gtk2_xrc-2.8.so.0.6.0 libwx_gtk2_aui-2.8.so.0.6.0 And run it: LD_LIBRARY_PATH=. ./hello Result: Fatal Error: Mismatch between the program and library build versions detected. The library used 2.8 (debug,ANSI,compiler with C++ ABI 1002,wx containers,compatible with 2.6), and your program used 2.8 (no debug,ANSI,compiler with C++ ABI 1002,wx containers,compatible with 2.6). ./run.sh: line 1: 1810 Aborted LD_LIBRARY_PATH=. ./hello What is wrong?


  • Powershell $LastExitCode=0 but $?=False . Redirecting stderr to stdout gives NativeCommandError

    - by Colonel Panic
    Can anyone explain PowerShell's surprising behaviour in the second example below?

    First, an example of sane behaviour:

        PS C:\> & cmd /c "echo Hello from standard error 1>&2"; echo "`$LastExitCode=$LastExitCode and `$?=$?"
        Hello from standard error
        $LastExitCode=0 and $?=True

    No surprises. I print a message to standard error (using cmd's echo). I inspect the variables $? and $LastExitCode. They equal True and 0 respectively, as expected.

    However, if I ask PowerShell to redirect standard error to standard output over the first command, I get a NativeCommandError:

        PS C:\> & cmd /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
        cmd.exe : Hello from standard error
        At line:1 char:4
        + cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
            + CategoryInfo          : NotSpecified: (Hello from standard error :String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError
        $LastExitCode=0 and $?=False

    My first question: why the NativeCommandError? Secondly, why is $? False when cmd ran successfully and $LastExitCode is 0? PowerShell's docs about_Automatic_Variables don't explicitly define $?. I always supposed it is True if and only if $LastExitCode is 0, but my example contradicts that.

    Here's how I came across this behaviour in the real world (simplified). It really is FUBAR. I was calling one PowerShell script from another. The inner script:

        cmd /c "echo Hello from standard error 1>&2"
        if (! $?) {
            echo "Job failed. Sending email.."
            exit 1
        }
        # do something else

    Running this simply as .\job.ps1, it works fine and no email is sent. However, I was calling it from another PowerShell script, logging to a file: .\job.ps1 2>&1 > log.txt. In this case, an email is sent! Here, the act of observing a phenomenon changes its outcome. This feels like quantum physics rather than scripting!

    [Interestingly: .\job.ps1 2>&1 may or may not blow up, depending on where you run it]
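    A plausible reading (an assumption, not documented behaviour): with 2>&1 in effect, PowerShell wraps the native command's stderr lines in error records, which is what surfaces as NativeCommandError and also flips $? to False, while $LastExitCode still reflects the process exit code. If that is the case, testing the exit code directly keeps the inner script stable under redirection; a sketch:

        cmd /c "echo Hello from standard error 1>&2"
        if ($LastExitCode -ne 0) {      # exit code is unaffected by the 2>&1 wrapping
            echo "Job failed. Sending email.."
            exit 1
        }
        # do something else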


  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    Hello there. We have an online shopping site. When I go to the checkout page I get an error like this:

        error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    From the Apache error log I can see some attempts to connect to api.paypal.com. Here is the relevant part of my Apache error log:

        About to connect() to api.paypal.com port 443 (#0)
        Trying 66.211.168.123... * connected
        Connected to api.paypal.com (66.211.168.123) port 443 (#0)
        successfully set certificate verify locations:
        CAfile: none
        CApath: /etc/ssl/certs
        error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        Closing connection #0

    When I tried to connect to api.paypal.com using curl I got an error like this:

        curl -iv https://api.paypal.com/
        * About to connect() to api.paypal.com port 443 (#0)
        *   Trying 66.211.168.91... connected
        * Connected to api.paypal.com (66.211.168.91) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        * Closing connection #0
        curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

    Can anyone help me figure this out? Thanks in advance. Arun S


  • curl FTPS with client certificate to a vsftpd

    - by weeheavy
    I'd like to authenticate FTP clients either via username+password or a client certificate. Only FTPS is allowed. User/password works, but while testing with curl (I don't have another option) and a client certificate, I need to pass a user. Isn't it technically possible to authenticate only by providing a certificate?

    vsftpd.conf:

        passwd_chroot_enable=YES
        chroot_local_user=YES
        ssl_enable=YES
        rsa_cert_file=usrlocal/ssl/certs/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES

    Tested with

        curl -v -k -E client-crt.pem --ftp-ssl-reqd ftp://server:21/testfile

    the output is:

        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS handshake, CERT verify (15):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DES-CBC3-SHA
        * Server certificate:
        * SSL certificate verify result: self signed certificate (18), continuing anyway.
        > USER anonymous
        < 530 Anonymous sessions may not use encryption.
        * Access denied: 530
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):
        curl: (67) Access denied: 530

    This is theoretically ok, as I forbid anonymous access. If I specify a user with -u username:pass it works, but it would without a certificate too. The client certificate seems to be ok; it looks like this:

    client-crt.pem:

        -----BEGIN RSA PRIVATE KEY-----
        content
        -----END RSA PRIVATE KEY-----
        -----BEGIN CERTIFICATE-----
        content
        -----END CERTIFICATE-----

    What am I missing? Thanks in advance. (The OS is Solaris 10 SPARC.)


  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2, there are 28 kinds of activities, which fall into three categories: Control activities, OWB specific activities and Utility activities. For more information about Process Flow activities, please refer to OWB online doc. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution an OWB mapping in Process Flow and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take an amazing tour of using the User Defined activity. Let's start. Enable execution of User Defined activity Let's start this section from creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of activity USER_DEFINED unchanged except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in /tmp directory. Here is the content of /tmp/test.sh (this article is demonstrating a scenario in Linux system, and /tmp/test.sh is a Bash shell script): echo Hello World! > /tmp/test.txt Note: don't forget to grant the execution privilege on /tmp/test.sh to OS Oracle user. For simplicity, we just use the following command. chmod +x /tmp/test.sh OK, it's so simple that we’ve almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". See below. That's because, by default, the User Defined activity is DISABLED. Configuration about this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISBALED. Where NATIVE_JAVA uses the Java 'Runtime.exec' interface, SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner which is executed by the default operating system user configured by the DBA. DISABLED prevents execution via these operators. We enable the execution of User Defined activity by setting: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint= NATIVE_JAVA Restart the Control Center service for the change of setting to take effect. cd <ORACLE_HOME>/owb/rtp/sql sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql sqlplus OWBSYS/<password of OWBSYS> @start_service.sql And then run the Process Flow again. We will see that the Process Flow completes successfully. The execution of /tmp/test.sh successfully generated a file /tmp/test.txt, containing the line Hello World!. Pass parameters to User Defined Activity The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB nor does it give any meaningful results back to OWB. That's to say, it lacks interaction. Maybe, sometimes such a Process Flow can fulfill the business requirement. 
But for most of the time, we need to get the User Defined activity executed according to some information prior to that step. In this section, we will see how to pass parameters to the User Defined activity and pass them into the to-be-executed shell script. First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character on the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc,' 'def,' and 'ghi' you can use the following equivalent: ?abc?def?ghi? or !abc!def!ghi! or |abc|def|ghi| If the token character or '\' needs to be included as part of the parameter, then it must be preceded with '\'. For example '\\'. If '\' is the token character, then '/' becomes the escape character. Let's configure the PARAMETER_LIST parameter as below: And modify the shell script /tmp/test.sh as below: echo $1 is saying hello to $2! > /tmp/test.txt Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line: Bob is saying hello to Alice! In the example above, the parameters passed into the shell script are static. This case is not so useful because: instead of passing parameters, we can directly write the value of the parameters in the shell script. To make the case more meaningful, we can pass two dynamic parameters, that are obtained from the previous activity, to the shell script. Prepare the Process Flow as below: The Mapping activity MAPPING_1 has two output parameters: FROM_USER, TO_USER. The User Defined activity has two input parameters: FROM_USER, TO_USER. All the four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER, VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets value from output parameter FROM_USER of MAPPING_1. We achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. See the two figures below. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets value from output parameter TO_USER of MAPPING_1. Also, we need to change the PARAMETER_LIST of the User Defined activity like below: Now, the shell script is getting input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependees then run the Process Flow. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! 'USER B' and 'USER A' are two outputs of the Mapping execution. Write the shell script within Oracle Warehouse Builder In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring these three parameters of a User Defined activity properly: COMMAND: Set the path of interpreter, by which the shell script will be interpreted. PARAMETER_LIST: Set it blank. SCRIPT: Enter the shell script content. Note that in Linux the shell script content is passed into the interpreter as standard input at runtime. About how to actually pass parameters to the shell script, we can utilize variable substitutions. 
As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity. So will the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provide some system pre-defined substitution variables. You can refer to the online document for that. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! Leverage the return value of User Defined activity All of the previous sections are connecting the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results? 1.  The simplest way is to add three simple-conditioned out-going transitions for the User Defined activity just like the figure below. In the figure, to simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow will end at END_SUCCESS, otherwise, the whole Process Flow will end at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic. 2.  Or we can utilize complex conditions to work with different results of the User Defined activity. Previously, in our script, we only have this line: echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt We can add more logic in it and return different values accordingly. echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt if CONDITION_1 ; then ...... exit 0 fi if CONDITION_2 ; then ...... exit 2 fi if CONDITION_3 ; then ...... exit 3 fi After that we can leverage the result by checking RESULT_CODE in condition expression of those out-going transitions. Let's suppose that we have the Process Flow as the following graph (SUB_PROCESS_n stands for more different further processes): We can set complex condition for the transition from USER_DEFINED to SUB_PROCESS_1 like this: Other transitions can be set in the same way. Note that, in our shell script, we return 0, 2 and 3, but not 1. As in Linux system, if the shell script comes across a system error like IO error, the return value will be 1. We can explicitly handle such a return value. Summary Let's summarize what has been discussed in this article: How to create a Process Flow with a User Defined activity in it How to pass parameters from the prior activity to the User Defined activity and finally into the shell script How to write the shell script within Oracle Warehouse Builder How to do variable substitutions How to let the User Defined activity return different values and in what way can we leverage


  • How to set mod_rewrite in WAMP?

    - by Martin Jenseb
    I am learning Symfony2 and following http://symfony.com/doc/current/quick_tour/the_big_picture.html, which gives:

        http://localhost/Symfony/web/app.php/demo/hello/Fabien

    And if you use Apache with mod_rewrite enabled, you can even omit the app.php part of the URL:

        http://localhost/Symfony/web/demo/hello/Fabien

    Last but not least, on production servers you should point your web root directory to the web/ directory to secure your installation and have an even better looking URL:

        http://localhost/demo/hello/Fabien

    How can I set this up in WAMP Server?
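    In WAMP the usual steps are to enable the rewrite module, allow .htaccess overrides for the document root, and restart Apache; Symfony2 ships a web/.htaccess that does the actual rewriting to app.php. A sketch of the httpd.conf pieces (paths are typical WAMP defaults, adjust to your install):

        # httpd.conf: uncomment the module (or tick rewrite_module in the WAMP tray menu)
        LoadModule rewrite_module modules/mod_rewrite.so

        # allow the .htaccess shipped in Symfony/web to take effect
        <Directory "c:/wamp/www/">
            AllowOverride All
        </Directory>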


  • .php files see Apache environment variables, .html files don't

    - by dotancohen
    Consider this environment:

        $ cat .htaccess
        AddType application/x-httpd-php .php .html
        SetEnv Foo Bar

        $ cat test.php
        <?php
        echo "Hello: ";
        echo $_SERVER['Foo'];
        echo $_ENV['Foo'];
        echo getenv('Foo');
        ?>

        $ cat test.html
        <?php
        echo "Hello: ";
        echo $_SERVER['Foo'];
        echo $_ENV['Foo'];
        echo getenv('Foo');
        ?>

    This is the output of test.php:

        Hello BarBarBar

    This is the output of test.html:

        Hello

    Why might that be? How might I fix it?

    Here is phpinfo.php: http://pastebin.com/rgq7up61
    Here is phpinfo.html: http://pastebin.com/VUKFNZ36

    If anyone knows where I can host a real webpage instead of just the HTML for one, please let me know and I'll move the content there. Thanks.


  • How come my Apache can't read my media folder, but it can load the site? (static files don't work)

    - by Alex
        Alias /media/ /home/matt/repos/hello/media
        <Directory /home/matt/repos/hello/media>
            Options -Indexes
            Order deny,allow
            Allow from all
        </Directory>

        WSGIScriptAlias / /home/matt/repos/hello/wsgi/django.wsgi

    /media is my directory. When I go to mydomain.com/media/, it says 403 Forbidden. And the rest of my site doesn't work because all static files are 404s. Why? The page loads, just not the media folder.

    Edit: hello is my project folder. I have tried 777 on all the permissions of that folder.
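    A common cause of exactly this split (403 on the Alias while the WSGI app itself responds) is that the Apache user cannot traverse one of the parent directories, typically a 700 home directory; 777 on the media folder alone doesn't help then. A hedged check-and-fix sketch, treating the permission problem as an assumption rather than a confirmed diagnosis:

        namei -m /home/matt/repos/hello/media          # show permissions along every path component
        sudo chmod o+x /home/matt /home/matt/repos /home/matt/repos/hello   # grant traverse (execute) on parents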


  • echo newline character not working in bash

    - by Bashuser
    I have a bash script with lots of echo statements, and I aliased echo to echo -e, both in .bash_profile and .bashrc, so that newlines are printed properly. For a statement like

        echo 'Hello\nWorld'

    the output should be

        Hello
        World

    but the output I am getting is

        Hello\nWorld

    I even tried using shopt -s expand_aliases in the script; it doesn't help. I am running my script as bash /scripts/scriptnm.sh; if I run it as . /scripts/scriptnm.sh I get the desired output.
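    Aliases defined in .bashrc/.bash_profile are not loaded by a non-interactive bash running a script (and shopt -s expand_aliases only affects lines parsed after it takes effect), which is why bash /scripts/scriptnm.sh ignores the alias while sourcing the script in an interactive shell picks it up. The usual fix is not to rely on the alias at all; a sketch:

        #!/bin/bash
        echo -e 'Hello\nWorld'      # request escape interpretation explicitly
        printf 'Hello\nWorld\n'     # or use printf, which always interprets \n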


  • Why does Tomcat try to use the cache when compilation failed?

    - by etheros
    For some reason, it appears Tomcat is trying to hit its compilation cache when compilation failed. For example, if I create a JSP containing nothing but Hello, <%=world%>!, predictably, I get an error: org.apache.jasper.JasperException: Unable to compile class for JSP. Subsequent requests however alternate between this and org.apache.jasper.JasperException: org.apache.jasper.JasperException: Unable to load class for JSP. Further, if I create a JSP containing Hello!, it of course works just fine. If I modify it contain Hello, <%=name%>!, the response alternates between the previously-mentioned compilation error, and the cached Hello!. What's going on?


  • How do I initialize a Scala map with more than 4 initial elements in Java?

    - by GlenPeterson
    For 4 or fewer elements, something like this works (or at least compiles):

        import scala.collection.immutable.Map;

        Map<String,String> HAI_MAP = new Map4<>("Hello", "World",
                                                "Happy", "Birthday",
                                                "Merry", "XMas",
                                                "Bye", "For Now");

    For a 5th element I could do this:

        Map<String,String> b = HAI_MAP.$plus(new Tuple2<>("Later", "Aligator"));

    But I want to know how to initialize an immutable map with 5 or more elements, and I'm flailing in Type-hell.

    Partial Solution

    I thought I'd figure this out quickly by compiling what I wanted in Scala, then decompiling the resultant class files. Here's the Scala:

        object JavaMapTest {
          def main(args: Array[String]) = {
            val HAI_MAP = Map(("Hello", "World"),
                              ("Happy", "Birthday"),
                              ("Merry", "XMas"),
                              ("Bye", "For Now"),
                              ("Later", "Aligator"))
            println("My map is: " + HAI_MAP)
          }
        }

    But the decompiler gave me something that has two periods in a row and thus won't compile (I don't think this is valid Java):

        scala.collection.immutable.Map HAI_MAP = (scala.collection.immutable.Map)
            scala.Predef..MODULE$.Map().apply(scala.Predef..MODULE$.wrapRefArray(
                scala.Predef.wrapRefArray((Object[])new Tuple2[] {
                    new Tuple2("Hello", "World"),
                    new Tuple2("Happy", "Birthday"),
                    new Tuple2("Merry", "XMas"),
                    new Tuple2("Bye", "For Now"),
                    new Tuple2("Later", "Aligator") }));

    I'm really baffled by the two periods in this:

        scala.Predef..MODULE$

    I asked about it on #java on Freenode and they said the .. looked like a decompiler bug. It doesn't seem to want to compile, so I think they are probably right. I'm running into it when I try to browse interfaces in IntelliJ and am just generally lost.

    Based on my experimentation, the following is valid:

        Tuple2[] x = new Tuple2[] {
            new Tuple2<String,String>("Hello", "World"),
            new Tuple2<String,String>("Happy", "Birthday"),
            new Tuple2<String,String>("Merry", "XMas"),
            new Tuple2<String,String>("Bye", "For Now"),
            new Tuple2<String,String>("Later", "Aligator")
        };
        scala.collection.mutable.WrappedArray<Tuple2> y = scala.Predef.wrapRefArray(x);

    There is even a WrappedArray.toMap() method, but the types of the signature are complicated and I'm running into the double-period problem there too when I try to research the interfaces from Java.


  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column on the screen of a wall in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels using this, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        int c = (curcol) * canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        int lim = (drawheight + startdraw) * canvpitch + c + (int) canvaspixels;
        Uint8 *k = (Uint8 *)hello->pixels + (hit) * hello->format->BytesPerPixel;

        for (int j = (startdraw) * (canvpitch) + c + (int) canvaspixels; (j < lim); j += canvpitch) {
            Uint8 *q = (Uint8 *) ((int(h)) * (texpitch) + k);
            *(Uint32 *)j = *(Uint32 *)q;
            h += s;
        }

    We have void pointers (not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all being intermingled, and while it works, it is quite confusing.


  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS. But curl is failing to verify the SSL cert. $ curl --verbose --capath ./certs/ --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: ./certs/ * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS alert, Server hello (2): * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed * Closing connection #0 curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. I know about the -k option. But I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains: A Verisign intermediate cert Two self-signed certs The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert) curl is able to verify the SSL cert. $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: ./certs/verisign-intermediate-ca.crt CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using RC4-SHA * Server certificate: * subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com * start date: 2011-04-17 00:00:00 GMT * expire date: 2012-04-15 23:59:59 GMT * common name: example.com (matched) * issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3 * SSL certificate verify ok. 
> HEAD / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > < HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found < Cache-Control: must-revalidate,no-cache,no-store Cache-Control: must-revalidate,no-cache,no-store < Content-Type: text/html;charset=ISO-8859-1 Content-Type: text/html;charset=ISO-8859-1 < Content-Length: 1267 Content-Length: 1267 < Server: Jetty(7.2.2.v20101205) Server: Jetty(7.2.2.v20101205) < * Connection #0 to host example.com left intact * Closing connection #0 * SSLv3, TLS alert, Client hello (1): In addition, if I try hitting one of the sites using a self signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and it is properly hash. Finally, I am able to verify the SSL cert with openssl, using its -CApath option. $ openssl s_client -CApath ./certs/ -connect example.com:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify return:1 depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 verify return:1 depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 verify return:1 depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com verify return:1 --- Certificate chain 0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- Server certificate -----BEGIN CERTIFICATE----- <cert removed> -----END CERTIFICATE----- subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- No client certificate CA names sent --- SSL handshake has read 1563 bytes and written 435 bytes --- New, TLSv1/SSLv3, Cipher is RC4-SHA Server public key is 2048 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-SHA Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717 Session-ID-ctx: Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2 Key-Arg : None Start Time: 1303258052 Timeout : 300 (sec) Verify return code: 0 (ok) --- QUIT DONE How can I get curl to verify this cert using the --capath option?

