Search Results

Search found 134 results on 6 pages for 'pavel vlasov'.

Page 2/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • ffmpeg opens the webcam using YUYV, but I want MJPEG

    - by Pavel
    I need ffmpeg to open the webcam (a Logitech C910) in MJPEG mode, because the camera can deliver ~24 fps using the MJPEG "protocol" but only ~10 fps using YUYV. Can I choose between them on the ffmpeg command line?

        xx@(none) ~ $ v4l2-ctl --list-formats
        ioctl: VIDIOC_ENUM_FMT
            Index        : 0
            Type         : Video Capture
            Pixel Format : 'YUYV'
            Name         : YUV 4:2:2 (YUYV)

            Index        : 1
            Type         : Video Capture
            Pixel Format : 'MJPG' (compressed)
            Name         : MJPEG

    My current command line:

        ffmpeg -y -f alsa -i hw:3,0 -f video4linux2 -r 20 -s 1280x720 -i /dev/video0 -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi

    ffmpeg produces a corrupted H.264 stream when I record from the webcam, but a normal H.264 stream when I record from x11grab. Other codecs (mjpeg, mpeg4) work fine with the webcam, but that is another story.
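
    A hedged sketch of what this usually looks like: ffmpeg's video4linux2 input accepts an -input_format option that selects the pixel format the device is opened with, so a command along the following lines should request MJPEG from the camera. Everything except -input_format mjpeg is taken from the question's own command; whether it helps depends on the ffmpeg build.

        ffmpeg -y -f alsa -i hw:3,0 \
               -f video4linux2 -input_format mjpeg -r 20 -s 1280x720 -i /dev/video0 \
               -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi

    v4l2-ctl --list-formats (as in the question) confirms which formats the driver actually exposes.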

    Read the article

  • Problem monitoring GlassFish with JConsole

    - by Pavel
    I have enabled the JMX connector on a remote GlassFish server and restarted it. During startup the server reported: "Standard JMX Clients (like JConsole) can connect to JMXServiceURL: [service:jmx:rmi:///jndi/rmi://myserver:8686/jmxrmi] for domain management purposes." Port 8686 is open for connections, but I can't connect to the server with JConsole; it just says "Connection failed". How can I solve this problem? Thanks in advance.
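
    For reference, JConsole accepts that service URL directly on the command line, so a minimal check from the client machine (using the URL the server printed) would be:

        jconsole service:jmx:rmi:///jndi/rmi://myserver:8686/jmxrmi

    If that also fails, a common culprit is the RMI stub advertising a hostname or secondary port that is not reachable from the client, so it is worth checking what "myserver" resolves to on the client side and whether a firewall sits in between. This is a hedged suggestion, not a confirmed diagnosis.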

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but I am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

        <rm>
            <failoverdomains>
                <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n1"/>
                </failoverdomain>
                <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n2"/>
                </failoverdomain>
            </failoverdomains>
            <resources>
                <script file="/etc/init.d/clvm" name="clvmd"/>
                <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
            </resources>
            <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
            <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
        </rm>

    This is what I mean: when the clusterfs resource agent handles a GFS partition, it is not unmounted by default (unless the force_unmount option is given). So when I issue "clusvcadm -s shared-storage-inst1", clvm is stopped but GFS is not unmounted; the node can no longer alter the LVM structure on the shared storage, but it can still access the data. Even though a node can do that quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it finds the dlm lock produced by GFS and fails to stop.

    I could simply add force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue about how the decision was made. Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script (https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources), or simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

        chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but also when any of the resources misbehave. So, which is the right way for me to go?

    1. Leave the GFS partition mounted (any reasons to do so?)
    2. Set force_unmount="1". Won't this break anything? Why is this not the default?
    3. Use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition.
    4. Start it at boot and don't include it in cluster.conf at all (any reasons to do so?)

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or thoughts on the issue. What does /etc/cluster/cluster.conf look like, for example, when gfs is configured with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!
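
    For reference, a hedged sketch of option 2 from the list above: the force_unmount variant only changes the clusterfs line from the question's own rm section, nothing else (this is an illustration of what the asker is weighing, not a recommendation):

        <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs" force_unmount="1"/>

    With this in place, "clusvcadm -s shared-storage-inst1" would unmount /mnt/gfs as part of stopping the service, so clustat's "stopped" state matches reality and cman no longer trips over a leftover dlm lock on shutdown.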

    Read the article

  • OpenLDAP replication fails, "syncrepl_entry: rid=666 be_modify failed (20)"

    - by Pavel
    I've configured a second host to replicate the main LDAP server via syncrepl in slapd.conf:

        syncrepl rid=666
            provider=ldaps://my-main-server.com
            type=refreshAndPersist
            searchBase="dc=Staff,dc=my-main-server,dc=com"
            filter="(objectClass=*)"
            scope=sub
            schemachecking=off
            bindmethod=simple
            binddn="cn=repadmin,dc=my-main-server,dc=com"
            credentials=mypassword

    When I restart slapd, it writes to /var/log/debug:

        Jun 11 15:48:33 cluster-mn-04 slapd[29441]: @(#) $OpenLDAP: slapd 2.4.9 (Mar 31 2009 07:18:37) $ ^Ibuildd@yellow:/build/buildd/openldap2.3-2.4.9/debian/build/servers/slapd
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: slapd starting
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: null_callback : error code 0x14
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: syncrepl_entry: rid=666 be_modify failed (20)
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: do_syncrepl: rid=666 quitting

    I've looked into the sources for the return code and found only "#define LDAP_TYPE_OR_VALUE_EXISTS 0x14" in include/ldap.h. Anyway, I don't quite get what the error message means. Can you help me debug this problem and figure out why the LDAP replication doesn't work? I've managed to put a "manual" copy into the database via slapcat and slapadd, but I'd like it to sync automatically.

    UPDATE: "Solved" by removing /var/lib/ldap/* and re-importing the database with slapadd.
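
    The workaround from the update, spelled out as a hedged sketch (paths are Debian defaults and the openldap user/group is an assumption; stop the consumer's slapd before wiping its database):

        # on the provider
        slapcat -l full-dump.ldif

        # on the consumer
        /etc/init.d/slapd stop
        rm -rf /var/lib/ldap/*
        slapadd -l full-dump.ldif
        chown -R openldap:openldap /var/lib/ldap
        /etc/init.d/slapd start

    Starting syncrepl from a database identical to the provider's avoids the be_modify conflicts (error 20 = LDAP_TYPE_OR_VALUE_EXISTS) that occur when the consumer already holds diverging copies of the entries.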

    Read the article

  • Why does NX Client for Windows silently close after connecting?

    - by pavel
    Hey! I connect remotely to my Ubuntu server from a Vista machine. Now I need to run a GUI application on the server (Wireshark), so I decided to use FreeNX server/client to view the Ubuntu GUI from Vista. I have successfully installed FreeNX on Ubuntu and NX Client on Vista, following this guide. Unfortunately, I am now stuck with the following problem: at the client, the !M logo window appears, but after a few seconds that window just closes, without showing any error message. Guys, I'm really stuck, please help! Maybe I should have installed some graphical environment on the server? These are the details from the NX client; it seems there are no errors:

        Info: Display running with pid '7768' and handler '0x670d24'.
        NXPROXY - Version 3.4.0
        Copyright (C) 2001, 2007 NoMachine.
        See http://www.nomachine.com/ for more information.
        Info: Proxy running in client mode with pid '2168'.
        Session: Starting session at 'Sat Dec 19 10:58:35 2009'.
        Warning: Connected to remote version 3.3.0 with local version 3.4.0.
        Info: Connection with remote proxy completed.
        Info: Using WAN link parameters 768/24/1/0.
        Info: Using cache parameters 4/4096KB/16384KB/16384KB.
        Info: Using pack method 'adaptive-9' with session 'kde'.
        Info: Using ZLIB data compression 1/1/32.
        Info: Using ZLIB stream compression 1/1.
        Info: No suitable cache file found.
        Info: Forwarding X11 connections to display ':0'.
        Info: Listening to font server connections on port '11000'.
        Session: Session started at 'Sat Dec 19 10:58:35 2009'.
        Info: Established X server connection.
        Info: Using shared memory parameters 0/0K.
        Session: Terminating session at 'Sat Dec 19 10:58:37 2009'.
        Session: Session terminated at 'Sat Dec 19 10:58:37 2009'.
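
    One thing worth checking, since the log shows the session type is 'kde' and the asker already suspects a missing graphical environment: FreeNX needs an actual desktop session to start on the server. A hedged sketch for Ubuntu follows; the package names are assumptions and depend on which session type the NX client is configured for.

        sudo apt-get update
        sudo apt-get install kubuntu-desktop    # KDE session, matching the 'kde' pack method in the log
        # or: sudo apt-get install ubuntu-desktop / xubuntu-desktop for GNOME / Xfce sessions

    If a desktop is already installed, the server-side session logs (typically under ~/.nx/ on the Ubuntu box) usually explain why the session exits right after starting.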

    Read the article

  • Creating zoom-pan video from a picture in Linux

    - by Pavel
    I would like to make a six-second video from six images. Each second slides over one image from its top to its bottom. Or some other motion effect – I would like to try several. I tried:

    • kdenlive
    • Imagination
    • Videoporama
    • PhotoFilmStrip

    The first one doesn't have enough settings (I don't remember what exactly was missing), and all of them produce rather poor quality – the resized picture is very "aliased" (as if no quadratic filter was applied during resizing).
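
    A hedged alternative sketch using ffmpeg's zoompan filter (available in newer ffmpeg builds), which scales with proper filtering and can pan over a still image. The expressions below are illustrative and will need tuning, and the file names are assumptions:

        # one 1-second clip per image, panning from top to bottom at 25 fps
        ffmpeg -i img1.jpg -vf "zoompan=z=1.3:x='iw/2-(iw/zoom/2)':y='(ih-ih/zoom)*on/24':d=25:s=1280x720:fps=25" -c:v libx264 clip1.mp4
        # ...repeat for img2.jpg .. img6.jpg, then join the six clips:
        ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4

    where list.txt contains one "file 'clipN.mp4'" line per clip.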

    Read the article

  • ssh hangs on "Last login" line

    - by Pavel H
    This happened for the first time three days ago - I ssh to the server, authenticate with a password, and get the welcome message, but it hangs on the "Last login:..." line. The command prompt never appears and the server doesn't react to my input. Other services on the server keep working fine (apache, tomcat, database, ...). The box has out-of-band management, through which I was able to restart it. After the restart ssh worked fine again and I didn't find anything suspicious in the logs.

    Three days later the same problem occurred on this box again, and also on another server in the cluster - 100% the same symptoms. Both servers have an installation of Debian Squeeze (6.0.2) that is about two months old, and the problem never occurred before despite frequent ssh-ing, so it shouldn't be a problem with settings. We haven't installed anything new for quite some time, and I made sure there is enough disk space on both servers. Since it started happening all of a sudden on two servers at about the same time, I suspect a bug may have been introduced via Debian updates, yet I haven't been able to find anyone with the same problem. The most similar issues I have found:

    • "ssh freezes at the Last Login line" - in our case everything worked fine until recently, so settings shouldn't be our problem. Disk space is checked; I couldn't check the memory, but I would expect something in the logs if the system had been running out of it.
    • "Remote Fedora system unresponsive, odd but consistent behavior when trying to log in" - a problem with high load on the server; unlike that case, nothing changes here even if I wait for 10+ minutes.
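
    A hedged debugging step for the next time it happens, assuming a root shell is still reachable via the out-of-band console: run a one-off sshd in debug mode on a spare port and connect to it, so the hang point shows up in the foreground output instead of being lost.

        # on the affected server (adjust the port if 2222 is taken)
        /usr/sbin/sshd -d -p 2222
        # from the client
        ssh -vvv -p 2222 user@server

    If the debug instance also stalls right after the MOTD/last-login lines, the usual suspects are a blocking shell startup file, a hung PAM module, or a dead NFS mount referenced from the login scripts.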

    Read the article

  • How to change the mail server domain so the From address shows domainname.com instead of the server's IP?

    - by Pavel
    Hi guys. I'm kind of a noob as a server admin, so please bear with me. I've installed the Postfix mail server and everything is working fine, but the 'From' box is displaying an address with the server's IP in it. I want to set it up so it displays domainname.com instead of the IP. I just hope you know what I mean. My main.cf in the postfix folder looks like this:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version

        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = mail.thevinylfactory
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mail.thevinylfactory.com, thevinylfactory, localhost.localdomain, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    Can anyone help me with this one? If you need any more details please let me know. Thanks in advance!
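
    A hedged sketch of the usual fix, assuming the goal is for locally generated mail to appear as coming from thevinylfactory.com (the domain is taken from the mydestination line above; the exact values are assumptions): give Postfix a fully qualified hostname and derive the origin from the domain, either in main.cf or by putting the bare domain into /etc/mailname, since myorigin points there.

        myhostname = mail.thevinylfactory.com
        mydomain   = thevinylfactory.com
        myorigin   = $mydomain

    Then reload Postfix (postfix reload). Note that the question's config sets myhostname to "mail.thevinylfactory" without the .com, which on its own can produce odd sender addresses.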

    Read the article

  • How do I send e-mails with attachments to a Microsoft WebTV user?

    - by Petr 'PePa' Pavel
    My friend uses Microsoft WebTV (his e-mail address ends with @webtv.net) and I'd like to send him an e-mail with a picture attached to it. We went through a series of attempts, one of which ended in success, all the others in failure. He just can't see my e-mail in his mailbox when it contains an attachment; e-mails without attachments always go through fine. What seemed to help in the first successful case was that he added my e-mail address to his address book, and my e-mail suddenly showed up. It seemed to have been delivered before, but hidden. He kept my address in his address book, however, and it didn't help with the following trials. He did look into his junk folder; nothing there. I made sure the file name contains no spaces. It's a regular JPEG, named something-like-this.jpg. I downsized it to only about 50k, as I've read somewhere that that's a limit; I actually doubt this piece of information, because I think the successful attempt was larger. webtv.net contains zero information. I watched their video demo for the e-mail client, so I at least know what the user interface looks like; I've never laid my hands on the real thing. I'm an advanced user myself (a programmer) but I can't wrap my mind around this. He, on the other hand, is a very technically inexperienced user, and because he's halfway across the globe, I can't come and look over his shoulder. He doesn't have a computer; afaik there's no way I could see what he sees. Any ideas on how to debug this? Thanks for your time, guys.

    P.S. I can't tag this "webtv" because such a tag doesn't exist yet and my reputation is too low, sorry.

    Read the article

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem in short: I can successfully ssh to it from machines in the same LAN (192.168.1.0) but not from others (in 10.23.0.0). The SuSE box runs the SSH server openssh-4.2p1-18.12. I have ruled out the firewall and the hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say.

    On the client:

        $ ssh -vvv 192.168.1.5
        OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
        debug1: Connection established.
        debug1: identity file /home/nbuild/.ssh/identity type -1
        debug1: identity file /home/nbuild/.ssh/identity-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1

    On the server:

        Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
        Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
        Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
        Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340

    The above server log is with log level DEBUG3 enabled. With the default log level (INFO), the only thing the server logs is:

        Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11

    Any hints? I feel I've tried everything already.
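
    A hedged way to narrow this down, assuming tcpdump is available on the SUSE box (the interface name is an assumption): watch the wire while attempting a login from the 10.23.0.0 side and see whether the SSH version banners actually cross.

        tcpdump -n -i eth0 host 10.23.1.11 and port 22

    If the TCP handshake completes but the banner exchange never does (which would be consistent with the server's "Did not receive identification string" and the client stalling right after "Connection established"), the usual suspects on a routed path are MTU/fragmentation problems or an intermediate device dropping larger packets, rather than sshd itself.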

    Read the article

  • Network configuration with limited hardware

    - by Pavel
    I have cable internet and two PCs. One of them has wireless and ethernet; the other has ethernet only. I also have a DSL modem with built-in wireless (some common Speedstream model). My goal is to connect all of them so that both PCs have internet access. As I understand it, I have two options: connect the Speedstream to the cable and then connect both PCs to the Speedstream; or connect the PC that has wireless to the cable, and then connect the ethernet-only PC to it via ethernet, with the wireless side going through the Speedstream. The first option seems easy, but it doesn't seem to work. What about the other choice? Is it possible to do?

    Read the article

  • UVC device Logitech WebCam 9000 pro

    - by Pavel
    There is a really nice webcam out there that acts as a "USB Video Class" device (UVC - the standard USB video interface): the Logitech Webcam 9000. UVC offers a unified interface that allows any UVC driver to control the device or grab a picture from it. You need one universal driver and you support all UVC devices (webcams, video cameras, video capture cards, etc.). For example, in Linux, if you have the UVC driver you don't need to think about a specific driver for a UVC webcam. UVC also defines a unified way for the webcam to report its available resolutions and other capabilities, so I see the 1600x1200 resolution without any problem. I wonder if Windows 7 has UVC - I mean a "universal UVC" driver ;-) It says "USB Video Class", but it doesn't give resolutions larger than 640x480, nor the webcam's controls like 'sharpness', 'focus' and others, as the Linux driver does...

    Read the article

  • ArchBeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013.

    • Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss
      This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it’s processing events in real time, using the Avatar project and Oracle Coherence’s Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!
    • HTML5 Application Development with Oracle WebLogic Server | Doug Clarke
      This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!
    • Video: ADF BC and REST services | Frederic Desbiens
      Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services.
    • One Client Two Clusters | David Felcey
      "Sometimes its desirable to have a client connect to multiple clusters, either because the data is dispersed or for instance the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example.
    • Exceptions Handling and Notifications in ODI | Christophe Dupupet
      Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.
    • Securing WebSocket applications on Glassfish | Pavel Bucek
      WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic for WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL.
    • Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira
      Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."
    • Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng
      Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication."
    • Tech Article: SOA in Real Life: Mobile Solutions
      The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT
    • The ACE Director Thing | Dr. Frank Munz
      Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program.

    Thought for the Day: "Even if you're on the right track, you'll get run over if you just sit there." — Will Rogers, American humorist (November 4, 1879 – August 15, 1935) Source: brainyquote.com

    Read the article

  • CreateProcessAsUser doesn't draw the GUI

    - by pavel.tuzov
    Hi, I have a Windows service running under the "SYSTEM" account that checks whether a specific application is running for each logged-in user. If the application is not running, the service starts it (under the corresponding user name). I'm trying to accomplish my goal using CreateProcessAsUser(). The service does start the application under the corresponding user name, but the GUI is not drawn. (Yes, I'm making sure that the "Allow service to interact with desktop" check box is enabled.) System: XP SP3, language: C#. Here is some code that might be of interest:

        PROCESS_INFORMATION processInfo = new PROCESS_INFORMATION();
        startInfo.cb = Marshal.SizeOf(startInfo);
        startInfo.lpDesktop = "winsta0\\default";

        bResult = Win32.CreateProcessAsUser(
            hToken, null, strCommand,
            IntPtr.Zero, IntPtr.Zero, false, 0,
            IntPtr.Zero, null, ref startInfo, out processInfo);

    As far as I understand, setting startInfo.lpDesktop = "winsta0\\default" should make the process use the desktop of the corresponding user. Even contrary to what is stated here: http://support.microsoft.com/kb/165194, I tried setting lpDesktop to null, or not setting it at all, both giving the same result: the process was started in the name of the expected user and I could see part of the window's title bar. The "invisible" window intercepts mouse click events and handles them as expected; it just doesn't draw itself. Is anyone familiar with such a problem and knows what I am doing wrong?

    Read the article

  • Use XMPP instead of HTTP

    - by pavel
    Hey guys. My friend and I are working on an iPhone application that uses the XMPP protocol to provide chat functionality. Right now we are designing the architecture for the application. My friend is working on the iPhone side, and I am the Ruby on Rails guy. My friend suggested that we wrap every call that is usually served via HTTP into XMPP. So user registration, user search, profile editing, photo uploading, everything goes via XMPP; no HTTP at all. My friend wants to use XMPP because he says it's much easier to implement XMPP on the client side than HTTP. As for me, this is bullshit, but we've got a product owner who has been working with my friend for a long time and trusts him. So what I'm trying to do is convince my friend and the product owner that using XMPP for what HTTP can do just fine is totally not the best idea. I feel that if we implement everything over XMPP, we will have a pain in the ass till the end of our lives. But how do I prove it?

    P.S. I'm not against chat over XMPP; I am against user search, photo uploading, rankings, nearby search and various other RESTful requests going over it. Please leave a response, any help appreciated.

    A little update: yesterday we had a long discussion, and it turns out it's quite hard to receive responses from both XMPP and HTTP in Objective-C, because every single object and its data should be stored in a Core Data model, while this model can't be safely modified from various places. Say, if you use HTTP transport, you always want to use only HTTP transport to update data in your model; and if you use XMPP, you should always use XMPP, so you can't use both. That's what my iPhone buddy told me. It sounds weird to me; can anyone explain?

    Read the article

  • scalacheck/scalatest not found: how to add it in sbt/scala?

    - by Pavel Reich
    I've installed the Typesafe stack from http://typesafe.com/stack/download on my Ubuntu 12, then I created a Play project (g8 typesafehub/play-scala), and now I want to add scalatest or scalacheck to my project. So my_app/project/plugins.sbt has the following lines:

        // The Typesafe repository
        resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"

        // Use the Play sbt plugin for Play projects
        addSbtPlugin("play" % "sbt-plugin" % "2.0.1")

    Then I added scalatest using addSbtPlugin:

        addSbtPlugin("org.scalatest" %% "scalatest" % "2.0.M1" % "test")

    and now it fails with the following message when I run 'sbt test':

        [info] Resolving org.scalatest#scalatest;2.0.M1 ...
        [warn] module not found: org.scalatest#scalatest;2.0.M1
        [warn] ==== typesafe-ivy-releases: tried
        [warn] http://repo.typesafe.com/typesafe/ivy-releases/org.scalatest/scalatest/scala_2.9.1/sbt_0.11.3/2.0.M1/ivys/ivy.xml
        [warn] ==== local: tried
        [warn] ~/.ivy2/local/org.scalatest/scalatest/scala_2.9.1/sbt_0.11.3/2.0.M1/ivys/ivy.xml
        [warn] ==== Typesafe repository: tried
        [warn] http://repo.typesafe.com/typesafe/releases/org/scalatest/scalatest_2.9.1_0.11.3/2.0.M1/scalatest-2.0.M1.pom
        [warn] ==== typesafe-ivy-releases: tried
        [warn] http://repo.typesafe.com/typesafe/ivy-releases/org.scalatest/scalatest/scala_2.9.1/sbt_0.11.3/2.0.M1/ivys/ivy.xml
        [warn] ==== public: tried
        [warn] http://repo1.maven.org/maven2/org/scalatest/scalatest_2.9.1_0.11.3/2.0.M1/scalatest-2.0.M1.pom

    What I don't understand: why does it use the URL http://repo.typesafe.com/typesafe/releases/org/scalatest/scalatest_2.9.1_0.11.3/2.0.M1/scalatest-2.0.M1.pom instead of the real one, http://repo.typesafe.com/typesafe/releases/org/scalatest/scalatest_2.9.1/2.0.M1/scalatest_2.9.1-2.0.M1.pom? I have quite the same problem with scalacheck: it also tries to download using an sbt-version-specific artifactId, whereas the repository has only Scala-version-specific ones. What am I doing wrong? I understand there must be a switch in sbt somewhere not to use the sbt version as part of the artifact URL. I also tried using this in my plugins.sbt:

        libraryDependencies += "org.scalatest" %% "scalatest" % "2.0.M1" % "test"

    but it looks like it is completely ignored by sbt, and scalatest.jar hasn't appeared on the classpath:

        my_app/test/AppTest.scala:1: object scalatest is not a member of package org
        [error] import org.scalatest.FunSuite

    because the output of sbt clean && sbt test has lots of lines like "Resolving org.easytesting#fest-util;1.1.6" for other libraries, but nothing about scalatest. I use Scala 2.9.1 and sbt 0.11.3, trying to use scalatest 2.0.M1 and 1.8. For scalacheck:

        resolvers ++= Seq(
          "snapshots" at "http://oss.sonatype.org/content/repositories/snapshots",
          "releases"  at "http://oss.sonatype.org/content/repositories/releases"
        )

        libraryDependencies ++= Seq(
          "org.scalacheck" %% "scalacheck" % "1.9" % "test"
        )

    With the same outcome, i.e. it uses the sbt-version-specific POM URL, which doesn't exist. What am I doing wrong? Thanks.
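
    A hedged sketch of the usual layout: test libraries are ordinary library dependencies of the project, not sbt plugins, so they don't belong in project/plugins.sbt and not behind addSbtPlugin; sbt appends its own version to artifact names only for plugins, which is where the scalatest_2.9.1_0.11.3 URLs come from. For a Play 2.0 project generated from the typesafehub/play-scala template the dependencies usually go into the appDependencies list in project/Build.scala; the snippet below shows the plain build.sbt form, with versions taken from those tried in the question:

        // my_app/build.sbt (or the equivalent appDependencies entries in project/Build.scala)
        libraryDependencies ++= Seq(
          "org.scalatest"  %% "scalatest"  % "1.8" % "test",
          "org.scalacheck" %% "scalacheck" % "1.9" % "test"
        )

    After moving them out of plugins.sbt, sbt clean test should resolve scalatest_2.9.1 (rather than scalatest_2.9.1_0.11.3) from the standard repositories.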

    Read the article

  • iPhone app with SSL client certs

    - by Pavel Georgiev
    I'm building an iPhone app that needs to access a web service over HTTPS using client certificates. If I put the client cert (in PKCS#12 format) in the app bundle, I'm able to load it into the app and make the HTTPS call (largely thanks to stackoverflow.com). However, I need a way to distribute the app without any certs and leave it to the user to provide his own certificate. I thought I would do that by instructing the user to import the certificate into the iPhone's profiles (Settings - General - Profiles), which is what you get by opening a .p12 file in Mail.app, and then I would access that item in my app. I would expect the certificates in profiles to be available through the keychain API, but I guess I'm wrong about that.

    1) Is there a way for my app to access a certificate that I've already loaded into the iPhone's profiles?
    2) What other options do I have for loading a user-specified certificate in my app? The only thing I can come up with is providing some interface where the user can give a URL to his .p12 certificate, which I can then load into the app's keychain for later use, but that's not exactly user-friendly. I'm looking for something that would allow the user to put the cert on the phone (e.g. e-mail it to himself) and then load it in my app.

    Read the article

  • when does factory girl create objects in db?

    - by Pavel K.
    I am trying to simulate a session using Factory Girl / shoulda (it worked with fixtures, but I am having problems using factories). I have the following factories (user login and email both have 'unique' validations):

        Factory.define :user do |u|
          u.login 'quentin'
          u.email '[email protected]'
        end

        Factory.define :session_user, :class => Session do |u|
          u.association :user, :factory => :user
          u.session_id 'session_user'
        end

    and here's the test:

        class MessagesControllerTest < ActionController::TestCase
          context "normal user" do
            setup do
              @request.session[:user_id] = Factory(:user).id
              @request.session[:session_id] = Factory(:session_user).session_id
            end

            should "be able to access new message creation" do
              get :new
              assert_response :success
            end
          end
        end

    but when I run "rake test:functionals", I get this test result:

        1) Error:
        test: normal user should be able to access new message creation. (MessagesControllerTest):
        ActiveRecord::RecordInvalid: Validation failed: Account name already exists!, Email already exists!

    which means the record already exists in the db when I refer to it in the test setup. Is there something I don't understand here? Does Factory Girl create all factories in the db on startup? (Rails 2.3.5 / shoulda / Factory Girl)
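
    A hedged sketch of a common way around this, assuming the collisions come from Factory(:user) being evaluated twice (once directly and once through the :session_user association): use sequences so each build gets fresh unique attributes, and reuse a single user object for both the session record and session[:user_id]. Names and the example.com domain are illustrative.

        Factory.define :user do |u|
          u.sequence(:login) { |n| "quentin#{n}" }
          u.sequence(:email) { |n| "quentin#{n}@example.com" }
        end

        # in the test setup
        setup do
          user = Factory(:user)
          @request.session[:user_id]    = user.id
          @request.session[:session_id] = Factory(:session_user, :user => user).session_id
        end

    As for the title question: Factory(...) (the create strategy) saves to the database immediately each time it is called, and nothing is created at startup, which is why the second call trips the uniqueness validations.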

    Read the article

  • How come nobody wrote a RadioMenuItems class for WinForms?

    - by Pavel Radzivilovsky
    Or maybe Google is just not so friendly to me? What I want is this simple thing:

    • a constructor that accepts an array of menu item objects
    • a Value get/set property that would set all the Checked properties right
    • binding to all the Clicked events of the supplied items, exposed as one event
    • working DataBind facilities

    If you have encountered such a nice thing around, please direct me to it. No need for manual do-it-in-your-form1.cs-class links, please; that I can do myself.

    Read the article

  • Zend Framework - validators are not working anymore?

    - by Pavel Dubinin
    Earlier I happily used the following code for creating form elements (inside a Zend_Form descendant):

        // Set form options
        $this->setOptions(array(
            'elements' => array(
                'title' => array(
                    'type' => 'text',
                    'options' => array(
                        'required' => true,
                        'label' => 'Title',
                        'filters' => array('StringTrim'),
                        'validators' => array(
                            array('StringLength', false, array('minLength' => 1, 'maxLength' => 50)),
                        ),
                    )
                )
            )
        ));

    But now I've noticed that the validators are not working. I suspect this might be due to Zend updates. Does anyone else face this problem?

    Read the article

  • Zend Lucene - cannot search numbers

    - by Pavel Dubinin
    Using Zend Lucene, I cannot search numbers in description fields. I added the field like this:

        $doc->addField(Zend_Search_Lucene_Field::Text('description', $current_item['item_short_description'], 'utf-8'));

    Googling suggested that applying the following code should solve the problem, but it did not:

        Zend_Search_Lucene_Analysis_Analyzer::setDefault(
            new Zend_Search_Lucene_Analysis_Analyzer_Common_TextNum_CaseInsensitive()
        );

    Any thoughts?
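
    A hedged note on why the quoted fix often appears not to work: the default analyzer is consulted both when documents are indexed and when query strings are parsed, and the default Text analyzer drops digits, so any documents indexed before switching to the TextNum analyzer simply contain no number terms to find. A sketch of the ordering that usually matters (class and variable names follow the question; $index is assumed to be the Zend_Search_Lucene index handle):

        // set the analyzer before BOTH indexing and searching
        Zend_Search_Lucene_Analysis_Analyzer::setDefault(
            new Zend_Search_Lucene_Analysis_Analyzer_Common_TextNum_CaseInsensitive()
        );

        // then re-index existing documents so number tokens are actually stored
        $doc->addField(Zend_Search_Lucene_Field::Text('description', $current_item['item_short_description'], 'utf-8'));
        $index->addDocument($doc);

    The key point is that documents indexed under the old analyzer need to be re-added after the default is changed.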

    Read the article

  • Why "Algorithms" and "Data Structures" are treated as separate disciplines?

    - by Pavel Shved
    This question was the last straw, and I've been wondering about it for a long time: why do people think of "Algorithms" and "Data Structures" as something that can be separated from each other? I see a lot of evidence that they're separated in programmers' minds:

    • they request "Data Structures & Algorithms" books
    • they refer to "Data Structures" and "Algorithms" as separate university courses
    • they "know Algorithms" but are "weak in Data Structures" (can't find the link, sorry)
    • etc.

    In my opinion "Data Structures" are algorithms, since the concept of a "Data Structure" is about the algorithms that operate on the data going in and out of the structure. But that opinion seems not to be mainstream. What am I missing?

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >