Search Results


  • OWB 11gR2 - Find and Search Metadata in Designer

    - by David Allan
    Here are some tools and techniques for finding objects, specifically in the design repository. There are ways of navigating and collating objects that are useful for day-to-day development and build-time usage - this includes features out of the box and utilities constructed on top. There are four main techniques for navigating and finding objects in the repository; the first three are out of the box, the fourth is an expert utility.

    1. Navigating the tree, grouping by project and module - fine if you know the exact module/folder that objects reside in. The structure panel is a useful way of finding parts of an object, especially when the object is large, rather than using the canvas. In large-scale projects it helps to have accelerators (either find, or collections below).

    2. Advanced find, to search by name - 11gR2 included a find capability specifically for large-scale projects. There were improvements in both the tree search and the object editors (including highlighting in mappings, for example), so you can now do regular-expression-based searches and quickly navigate to objects within a repository.

    3. Collections - logically organize your objects into virtual folders by shortcutting the actual objects. This is useful for a range of things, since all the OWB services operate on collections too (export/import, validation, deployment). See the post here for new collection functionality in 11gR2.

    4. Reports, for searching by type, updated on, updated by, and so on - useful for activities such as periodic incremental actions (deploy all mappings changed in the past week). The report-style view is useful since I can quickly see who changed what and when. You can see all the audit details for an object within its property inspector, but it is handy to just get all objects changed today, or all objects changed since my last build.

    The fourth technique is a utility that combines UI extensions via experts with the public views on the repository. In the figure to the right you see the contextual option 'Object Search', which invokes the utility; you can see I have quite a number of modules within my project. Figuring out which objects might have changed is not simple, and this expert provides exactly that kind of search capability: it reports the objects in the design repository that satisfy some filter criteria. The criteria include objects updated in the last n days, optionally filtered by the user who made the update, by project, and by type (tables, mappings, and so on). The search dialog appears with these options; you can multi-select the object types, so for example you can select TABLE and MAPPING. It's also possible to search across projects if need be. If you have multiple users using the repository, you can define the OWB user name in the 'Updated by' property to restrict the report to just that user. Finally, there is a search name that is used for some of the options such as building a collection - this name is used for the collection to be built. In the example here, I've searched my project for all process flows and mappings that users have updated in the last 7 days. The results of the query are returned in a table containing the object names, types, full paths and audit details. The columns are sortable, so you can sort the results by name, type, path and so on.
    One of the cool things here is that you can then perform operations on these objects - edit them, export a single selection or the entire result set to MDL, create a collection from the results (so you have a saved set of references in the repository against which you can deploy, export and so on), create a deployment script from the results... or even add in your own ideas. You can see from this that you can do bulk operations on sets of objects based on search results. For example, selecting the 'Build Collection' option creates a collection with all of the objects from my search, and you can subsequently deploy, generate or maintain this collection of objects. Under the hood, the expert is just basic OMB commands from the product plus the public views on the design repository. You can see how easy it is to build up macro-like capabilities that will help you do day-to-day as well as build-time tasks on sets of objects.
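
    As a rough illustration of the public-view approach the expert relies on, the sketch below queries the design repository over JDBC for recently updated objects. The view and column names used here (ALL_IV_OBJECTS, UPDATED_ON, UPDATED_BY, OBJECT_TYPE_NAME) are assumptions from memory and vary by release, so verify them against the OWB API and Scripting Reference before relying on this.

        import java.sql.*;

        public class RecentOwbObjects {
            public static void main(String[] args) throws SQLException {
                // Connection details are placeholders; point them at the design repository schema.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//repo-host:1521/orcl", "owb_rep_owner", "secret")) {
                    // ALL_IV_OBJECTS and its columns are assumed names; adjust to your release.
                    String sql = "SELECT object_name, object_type_name, updated_by, updated_on "
                               + "FROM all_iv_objects "
                               + "WHERE updated_on > SYSDATE - ? "
                               + "ORDER BY updated_on DESC";
                    try (PreparedStatement ps = con.prepareStatement(sql)) {
                        ps.setInt(1, 7); // objects updated in the last 7 days
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                System.out.printf("%-30s %-15s %s%n",
                                        rs.getString("object_name"),
                                        rs.getString("object_type_name"),
                                        rs.getString("updated_by"));
                            }
                        }
                    }
                }
            }
        }

    The result set is essentially what the expert's report shows; building a collection or a deployment script from it is then a matter of feeding the names into the corresponding OMB commands.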

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it.

    I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .NET 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't.

    Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be, beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev web host.

    My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VMs on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might have been going on in IIS that doesn't happen in Visual Studio?

    In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle, or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it.

    In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped from which it could get settings out of the database. This makes total sense. The fix is sort of a hack, where I don't set up the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS.
    In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod attribute was either failing silently and running again later, or it was getting to that code only after Application_Start was called. In any case, weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious, because it renders the mobile/alternate view functionality very much broken.

    Read the article

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my OpenBSD client (OpenBSD 4.9) automount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount NFS manually:

        # mount_nfs -T -3 192.168.15.100:/exports /mnt
        # ls -la /mnt
        total 52
        drwxr-xr-x  7 root   wheel  4096 Oct  4 22:42 .
        drwxr-xr-x 16 root   wheel   512 Nov 26 16:33 ..
        drwxrwxr-x  5 _sndio _sndio 4096 Oct 31 21:58 centos
        drwxr-xr-x 15 root   wheel  4096 Nov  6 09:17 home
        drwxr-xr-x  5 root   wheel  4096 Oct 31 21:27 sl
        drwxr-xr-x  3 root   wheel  4096 Nov 19 16:02 sles
        drwxr-xr-x 17 503    503    4096 Nov 10 17:37 users
        #

    So connectivity is not an issue, as far as I can tell. As per the man page, the following is configured in /etc/amd/auto.home:

        /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp
        *         rhost:=192.168.15.100;rfs:=/exports

    In turn, /etc/amd/master is configured as such:

        # cat /etc/amd/master
        /exports amd.home

    Upon reboot, I can see the mount, but curiously enough it shows up as amd:24490 (apparently the amd process) rather than the remote hostname:

        amd:24490        0        0     0   100%    /exports

    From what I understand, amd on OpenBSD acts a little differently from the FreeBSD one. Still, I tried to see if it can automount. Nope:

        ksh: cd: /exports/users - Resource temporarily unavailable
        # cd /exports/192.168.15.100/host/users
        ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable

    A search on Google doesn't help too much - it seems that automounting NFS with OpenBSD is not something that is usually done, and other than this, information is fairly sparse. I can, of course, always mount it permanently, but I tend to be a bit anal about convention, so no for now. :) Some direction would be appreciated. (And oh, in case you are wondering, I tried the FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD and OpenBSD implement it.)

    UPDATE: After rewriting the map file several times, I got as far as actually communicating with the NFS server with this configuration:

        /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\
                  sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport
        *         ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport

    However, for some reason, it seems that amd will only default to NFS version 2 over UDP:

        # tcpdump dst kerberos
        tcpdump: listening on pcn0, link-type EN10MB
        tcpdump: WARNING: compensating for unaligned libpcap packets
        20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100
        20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96
        20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null
        20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66

    I tried various options to force it to mount as NFSv3, such as:

        /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\
                  sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport
        *         ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport

    or:

        /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\
                  sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport
        *         ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport

    Nothing yet. Curiously enough, OpenBSD's mount_nfs defaults to version 3, so I am not sure why amd would start with version 2. What would be the correct options to pass?

    Read the article

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello,

    Environment: a Mac OS X 10.6.3 install/import of a Mac OS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our Mac OS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM).

    Testing using ldapsearch:

        ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... fails with:

        ldap_start_tls: Protocol error (2)

    Testing with "-d 9" added, it fails with:

        res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>

    Testing without requiring STARTTLS, or with LDAPS:

        ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
        ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... succeeds with:

        # donaldr, users, darkhorse.com
        dn: uid=donaldr,cn=users,dc=darkhorse,dc=com
        uid: donaldr

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1
        result: 0 Success

    (We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf.)

    Testing with openssl:

        openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state

    ... succeeds:

        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:SSLv3 read server hello A
        depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        verify error:num=19:self signed certificate in certificate chain
        verify return:0
        SSL_connect:SSLv3 read server certificate A
        SSL_connect:SSLv3 read server done A
        SSL_connect:SSLv3 write client key exchange A
        SSL_connect:SSLv3 write change cipher spec A
        SSL_connect:SSLv3 write finished A
        SSL_connect:SSLv3 flush data
        SSL_connect:SSLv3 read finished A
        ---
        Certificate chain
         0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
           i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
         1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
           i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <deleted for brevity>
        -----END CERTIFICATE-----
        subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
        issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 2640 bytes and written 325 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851
            Session-ID-ctx:
            Master-Key: E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3
            Key-Arg   : None
            Start Time: 1271202435
            Timeout   : 300 (sec)
            Verify return code: 0 (ok)

    So we believe that the slapd daemon is reading our certificate and serving it to LDAP clients. When enabling SSL in the LDAP section of the Open Directory service, Apple Server Admin adds ProgramArguments ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist, and adds TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf. While that appears to be enough for LDAPS, it appears that it is not enough for TLS.

    Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed CA) with a self-signed certificate generated by Apple Server Admin results in no change in the ldapsearch results. Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs:

        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037
        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation

    Any debugging advice much appreciated.

    -- Tom Kishel

    Read the article

  • Users logging in to 3Com switches authenticated by RADIUS not getting admin privileges, and no access available when RADIUS is down

    - by 3D1L
    Hi,

    Following the setup that I have for my Cisco devices, I got a basic level of functionality authenticating users who log in to 3Com switches against a RADIUS server. The problem is that I cannot get the user to obtain admin privileges. I'm using Microsoft's IAS service. According to the 3Com documentation, when configuring the access policy on IAS, the value 010600000003 has to be used to specify the admin access level. That value has to be entered in the Dial-in profile section:

        010600000003 - indicates admin privileges
        010600000002 - manager
        010600000001 - monitor
        010600000000 - visitor

    Here is the configuration on the switch:

        radius scheme system
         server-type standard
         primary authentication XXX.XXX.XXX.XXX
         accounting optional
         key authentication XXXXXX
         key accounting XXXXXX
        domain system
         scheme radius-scheme system
        local-user admin
         service-type ssh telnet terminal
         level 3
        local-user manager
         service-type ssh telnet terminal
         level 2
        local-user monitor
         service-type ssh telnet terminal
         level 1

    The configuration is working with the IAS server, because I can check user login events with the Event Viewer tool. Here is the output of the DISPLAY RADIUS command at the switch:

        [4500]disp radius
        SchemeName =system Index=0 Type=standard
        Primary Auth IP =XXX.XXX.XXX.XXX Port=1645 State=active
        Primary Acct IP =127.0.0.1 Port=1646 State=active
        Second Auth IP =0.0.0.0 Port=1812 State=block
        Second Acct IP =0.0.0.0 Port=1813 State=block
        Auth Server Encryption Key= XXXXXX
        Acct Server Encryption Key= XXXXXX
        Accounting method = optional
        TimeOutValue(in second)=3 RetryTimes=3 RealtimeACCT(in minute)=12
        Permitted send realtime PKT failed counts =5
        Retry sending times of noresponse acct-stop-PKT =500
        Quiet-interval(min) =5
        Username format =without-domain
        Data flow unit =Byte
        Packet unit =1
        Total 1 RADIUS scheme(s). 1 listed

    Here is the output of the DISPLAY DOMAIN and DISPLAY CONNECTION commands after users log in to the switch:

        [4500]display domain
        0 Domain = system
        State = Active
        RADIUS Scheme = system
        Access-limit = Disable
        Domain User Template:
        Idle-cut = Disable
        Self-service = Disable
        Messenger Time = Disable
        Default Domain Name: system
        Total 1 domain(s). 1 listed.

        [4500]display connection
        Index=0 ,Username=admin@system
        IP=0.0.0.0
        Index=2 ,Username=user@system
        IP=xxx.xxx.xxx.xxx
        On Unit 1:Total 2 connections matched, 2 listed.
        Total 2 connections matched, 2 listed.
        [4500]

    Here is the DISP RADIUS STATISTICS output:

        [4500] %Apr 2 00:23:39:957 2000 4500 SHELL/5/LOGIN:- 1 - ecajigas(xxx.xxx.xxx.xxx) in unit1 login
        disp radius stat
        state statistic(total=1048):
        DEAD=1046 AuthProc=0 AuthSucc=0
        AcctStart=0 RLTSend=0 RLTWait=2
        AcctStop=0 OnLine=2 Stop=0 StateErr=0
        Received and Sent packets statistic:
        Unit 1........................................
        Sent PKT total :4 Received PKT total:1
        Resend Times Resend total
        1            1
        2            1
        Total        2
        RADIUS received packets statistic:
        Code= 2,Num=1 ,Err=0
        Code= 3,Num=0 ,Err=0
        Code= 5,Num=0 ,Err=0
        Code=11,Num=0 ,Err=0
        Running statistic:
        RADIUS received messages statistic:
        Normal auth request     , Num=1 , Err=0 , Succ=1
        EAP auth request        , Num=0 , Err=0 , Succ=0
        Account request         , Num=1 , Err=0 , Succ=1
        Account off request     , Num=0 , Err=0 , Succ=0
        PKT auth timeout        , Num=0 , Err=0 , Succ=0
        PKT acct_timeout        , Num=3 , Err=1 , Succ=2
        Realtime Account timer  , Num=0 , Err=0 , Succ=0
        PKT response            , Num=1 , Err=0 , Succ=1
        EAP reauth_request      , Num=0 , Err=0 , Succ=0
        PORTAL access           , Num=0 , Err=0 , Succ=0
        Update ack              , Num=0 , Err=0 , Succ=0
        PORTAL access ack       , Num=0 , Err=0 , Succ=0
        Session ctrl pkt        , Num=0 , Err=0 , Succ=0
        RADIUS sent messages statistic:
        Auth accept             , Num=0
        Auth reject             , Num=0
        EAP auth replying       , Num=0
        Account success         , Num=0
        Account failure         , Num=0
        Cut req                 , Num=0
        RecError_MSG_sum:0   SndMSG_Fail_sum :0
        Timer_Err       :0   Alloc_Mem_Err   :0
        State Mismatch  :0   Other_Error     :0
        No-response-acct-stop packet =0
        Discarded No-response-acct-stop packet for buffer overflow =0

    The other problem is that when the RADIUS server is not available, I cannot log in to the switch at all. The switch has three local accounts, but none of them works. How can I configure the switch to fall back to the local accounts in case the RADIUS service is not available?

    Read the article

  • Converting from MP4 to Xvid AVI using avconv?

    - by Ricardo Gladwell
    I normally use avidemux to convert MP4s to Xvid AVI for my Philips Streamium SLM5500. Normally I select MPEG-4 ASP (Xvid) at two-pass with an average bitrate of 1500 kb/s for video and AC3 (lav) audio, and it converts correctly. However, I'm trying to use avconv so I can automate the process with a script, but when I do this the video stutters and stops playing part way through. I have a suspicion it's something to do with a faulty audio conversion. The commands I'm using are as follows:

        avconv -y -i video.mp4 -pass 1 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi /dev/null
        avconv -y -i video.mp4 -pass 2 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi video.avi

    There is a bewildering array of arguments for avconv. Is there something I'm doing wrong? Is there a way I can script avidemux from a headless server? Please see the command-line output:

        $ avconv -y -i video.mp4 -pass 1 -vtag xvid -an -b:v 1500k -f avi /dev/null
        avconv version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Jan 24 2013 14:49:20 with gcc 4.7.2
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
          Duration: 00:44:09.16, start: 0.000000, bitrate: 669 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 720x404 [PAR 1:1 DAR 180:101], 538 kb/s, 25 fps, 25 tbr, 100 tbn, 50 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
            Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2013-02-04 13:53:42
        [buffer @ 0x7f4c40] w:720 h:404 pixfmt:yuv420p
        Output #0, avi, to '/dev/null':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
            ISFT            : Lavf53.21.1
            Stream #0.0(und): Video: mpeg4, yuv420p, 720x404 [PAR 1:1 DAR 180:101], q=2-31, pass 1, 1500 kb/s, 25 tbn, 25 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
        Stream mapping:
          Stream #0:0 -> #0:0 (h264 -> mpeg4)
        Press ctrl-c to stop encoding
        frame=66227 fps=328 q=2.0 Lsize=       0kB time=2649.16 bitrate=   0.0kbits/s
        video:401602kB audio:0kB global headers:0kB muxing overhead -100.000000%

        $ avconv -y -i video.mp4 -pass 2 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi video.avi
        avconv version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Jan 24 2013 14:49:20 with gcc 4.7.2
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
          Duration: 00:44:09.16, start: 0.000000, bitrate: 669 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 720x404 [PAR 1:1 DAR 180:101], 538 kb/s, 25 fps, 25 tbr, 100 tbn, 50 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
            Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2013-02-04 13:53:42
        [buffer @ 0x12b4f00] w:720 h:404 pixfmt:yuv420p
        Incompatible sample format 's16' for codec 'ac3', auto-selecting format 'flt'
        [mpeg4 @ 0x12b3ec0] [lavc rc] Using all of requested bitrate is not necessary for this video with these parameters.
        Output #0, avi, to 'video.avi':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
            ISFT            : Lavf53.21.1
            Stream #0.0(und): Video: mpeg4, yuv420p, 720x404 [PAR 1:1 DAR 180:101], q=2-31, pass 2, 1500 kb/s, 25 tbn, 25 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
            Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, flt, 128 kb/s
            Metadata:
              creation_time   : 2013-02-04 13:53:42
        Stream mapping:
          Stream #0:0 -> #0:0 (h264 -> mpeg4)
          Stream #0:1 -> #0:1 (ac3 -> ac3)
        Press ctrl-c to stop encoding
        Input stream #0:1 frame changed from rate:44100 fmt:s16 ch:2 to rate:44100 fmt:flt ch:2
        frame=66227 fps=284 q=2.2 Lsize=  458486kB time=2649.13 bitrate=1417.8kbits/s
        video:413716kB audio:41393kB global headers:0kB muxing overhead 0.741969%

    Read the article

  • RC of Entity Framework 4.1 (which includes EF Code First)

    - by ScottGu
    Last week the data team shipped the Release Candidate of Entity Framework 4.1.  You can learn more about it and download it here. EF 4.1 includes the new "EF Code First" option that I've blogged about several times in the past.  EF Code First provides a really elegant and clean way to work with data, and enables you to do so without requiring a designer or XML mapping file.  Below are links to some tutorials I've written in the past about it:

    - Code First Development with Entity Framework 4.x
    - EF Code First: Custom Database Schema Mapping
    - Using EF Code First with an Existing Database

    The above tutorials were written against the CTP4 release of EF Code First (and so some APIs might be a little different) - but the concepts and scenarios outlined in them are the same as with the RC.

    Go Live License

    Last week's EF 4.1 RC ships with a "go live" license that enables you to use it in production environments.  The final release of EF 4.1 will ship within the next 4 weeks and will be 100% API compatible with the RC release.

    Improvements with the RC

    The RC includes several improvements and enhancements.  The EF team has a good blog post summarizing the RC changes.  Scott Hanselman also has a nice video interview with the data team that talks more about the release. One of my favorite improvements introduced with last week's RC is its support for medium-trust security.  This enables you to use EF 4.1 (and Code First) within low-cost ASP.NET shared-hosting web environments - without requiring a hoster to install anything to use it. EF 4.1 also now supports validation not only in code-first scenarios, but also in model-first and database-first workflows.

    Upgrading from previous releases

    The RC does include a few API tweaks and changes from the prior CTP builds.  Read the release notes that come with the release to get a more detailed listing of the changes. John Papa also has an excellent "Upgrading to EF 4.1 RC" blog post that describes the steps he took when upgrading a large project he wrote with the previous CTP5 release.  The work to upgrade is pretty straightforward and easy - use his write-up as a guide on how to quickly update projects of your own.

    NuGet Package Rename

    One of the changes that the data team made between the CTP5 and RC releases was to rename the NuGet package from "EFCodeFirst" to "EntityFramework". They decided to make this change since the EF 4.1 release now includes several additions above and beyond just Code First. If you have already installed the "EFCodeFirst" NuGet package, you'll want to uninstall it and then install the new "EntityFramework" NuGet package.  John Papa's blog post details the exact steps on how to do this (it only takes ~20 seconds).

    More EF Tutorials

    Julie Lerman has created some nice whitepapers and tutorials for MSDN that show using the new EF4 and EF 4.1 feature set.  Click here to find links to read and watch them.

    Summary

    I'm really excited about the EF 4.1 release that will be shipping next month.  It significantly improves the Entity Framework, and makes it even easier and cleaner to work with data inside of .NET.  You can take advantage of it within all ASP.NET projects (including both Web Forms and MVC), within client projects using Windows Forms and WPF, and within other project types like WCF, Console and Services.  You can use NuGet to easily install it within all of them.

    Hope this helps,

    Scott

    P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • CodePlex Daily Summary for Wednesday, April 11, 2012

    Popular Releases

    Command-Line Database Builder: 1.0.2012.0411: Utility now supports arbitrary key:value pairs on the command-line for performing replacements in the .pp.sql files. Removed the usage of '-' to prefix key:value arguments. AspNetAssemblyPath is no longer a known key:value pair but can still be used because the tool now supports arbitrary key:value pairs for replacements. This was provided previously to support setting up ASP.NET Membership and Roles in a database. I've added a .pp.sql file to the Examples archive that demonstrates this usage.

    Supporting Guidance and Whitepapers: v1 - Team Foundation Service Whitepapers: Welcome to the BETA release of the Team Foundation Service Whitepapers preview. As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details: Documentation has been reviewed by Visual Studio ALM Rangers; Documentation has been through an independent technical review; All critical bugs have been resolved. Known Issue...

    Scrum Task Board Card Creator: TaskCardCreator 3.2.0.0: What's New: New report template added: Microsoft Visual Studio Scrum 1.0 Detailed Report. Supported Templates: Microsoft Visual Studio Scrum 1.0; MSF for Agile Software Development v5.0

    Microsoft .NET Gadgeteer: .NET Gadgeteer Core 2.42.550 (BETA): Microsoft .NET Gadgeteer Core RELEASE NOTES, Version 2.42.550, 11 April 2012. BETA VERSION WARNING: This is a beta version! Please note: API changes may be made before the next version (2.42.600); the designer will not show modules/mainboards for NETMF 4.2 until you get upgraded libraries from the module/mainboard vendors; install NETMF 4.2 (see link below) to use the new features of this release. That warning aside, this version should continue to sup...

    DISM GUI: DISM GUI 3.1.1: Fixes - Fixed a bug in the Delete Driver function; the Index field is not auto-populated with the number 1.

    LINQ to Twitter: LINQ to Twitter Beta v2.0.24: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, and Client Profile. 100% Twitter API coverage. Also available via NuGet.

    Kendo UI ASP.NET Sample Applications: Sample Applications (2012-04-11): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.

    Json.NET: Json.NET 4.5 Release 2: New feature - Added support for the SerializableAttribute and serializing a type's internal fields. New feature - Added MaxDepth to JsonReader/JsonSerializer/JsonSerializerSettings. New feature - Added support for ignoring properties with the NonSerializableAttribute. Fix - Fixed deserializing a null string throwing a NullReferenceException. Fix - Fixed JsonTextReader reading from a slow stream. Fix - Fixed CultureInfo not being overridden on JsonSerializerProxy. Fix - Fixed full trust ...

    SCCM Client Actions Tool: SCCM Client Actions Tool v1.12: SCCM Client Actions Tool v1.12 is the latest version. It comes with the following changes since the last version: Improved WMI date conversion to be aware of timezone differences and DST. Fixed new version check. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta - the tool itself; Cmdkey.exe - command-line tool for managing cached credentials (this is needed for the alternate credentials feature when running the HTA on Windows XP). Cmdkey.exe is natively availab...

    Dual Browsing: Dual Browser: Please note the following: I set up the address bar temporarily to only accept http:// .com addresses. Just type in the name of the website, excluding http://, www., and .com (e.g., for www.youtube.com just type youtube, then click OK). The page splitter can be grabbed by holding down your left mouse button and moving left or right. By right-clicking on the page background, you can choose to refresh, go back a page and so on. Demo video: http://youtu.be/L7NTFVM3JUY

    Multiwfn: Multiwfn 2.3.3: Multiwfn 2.3.3

    Liberty: v3.2.0.1 Release 9th April 2012: Change Log - Fixed - Reach: Fixed a bug where the object editor did not work on non-English operating systems.

    Path Copy Copy: 10.1: This release addresses the following work items: 11357, 11358, 11359. This release is a recommended upgrade, especially for users who didn't install the 10.0.1 version.

    ExtAspNet: ExtAspNet v3.1.3: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-08 v3.1.3 -??Language="zh_TW"?JS???BUG(??)。 +?D...

    Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.5.5: New Controls: ChatBubble, ChatBubbleTextBox, OpacityToggleButton. New Stuff: TimeSpan languages added: RU, SK, CS; expose the physics math from TimeSpanPicker; Image Stretch now on buttons. Bug Fixes: layout fix so RoundToggleButton and RoundButton are exactly the same; fix for ColorPicker when set via code-behind; ToastPrompt bug fix with OnNavigatedTo; Toast now adjusts its layout if the SIP is up; fixed some issues with Expression Blend support.

    Harness - Internet Explorer Automation: Harness 2.0.3: Support the operation of frameset, frame and iframe. Added commands: SwitchFrame, GetUrl, GoBack, GoForward, Refresh, SetTimeout, GetTimeout. Renamed commands: GetActiveWindow to GetActiveBrowser, SetActiveWindow to SetActiveBrowser, FindWindowAll to FindBrowser, NewWindow to NewBrowser, GetMajorVersion to GetVersion.

    Better Explorer: Better Explorer 2.0.0.861 Alpha: Fixed new folder button operation not working well in some situations; removed some unnecessary code like subclassing that is not needed anymore; added option to make Better Explorer default (at least for WIN+E operations); added option to enable file operation replacements (like Teracopy) to work with Better Explorer; added some basic usability to "Share" button; other fixes.

    LightFarsiDictionary - ??????? ??? ?????/???????: LightFarsiDictionary - v1: LightFarsiDictionary - v1

    WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.3: Version: 2.5.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog Legend: [B] Breaking change; [O] Marked member as obsolete. [O] WAF: Mark the StringBuilderExtensions class as obsolete because the AppendInNewLine method can be replaced with string.Jo...

    ClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes we now have Conditional Formatting. The conditional formatting was sponsored by http://www.bewing.nl (big thanks). New in v0.65.1: fixed issue when loading conditional formatting with default values for icon sets. New in v0.65.2: fixed issue loading conditional formatting; improved inserts performance.

    New Projects

    0x10c Tools: Tools for the 0x10c-CPU: assembler, emulator and (maybe in the future) a small compiler. Just for fun and exercise.

    AzureWiki: AzureWiki is the Wiki developed using Windows Azure platform which would be similar to dotnetwiki.

    Command-Line Database Builder: A command-line tool for interacting with a DBMS command-line interface (e.g., sqlcmd.exe) to execute a sequential list of SQL scripts against the DBMS. Tool allows for expression replacement in the SQL scripts during execution.

    copydata: The CopyData command-line utility enables you to easily transfer sets of data from an Oracle or SQL Server data source directly to a target SQL Server database. It is developed in C#.

    DinoDoc: The little friendly batch-upload tool designed for SharePoint Server and Windows SharePoint Services, enabling you to easily upload multiple files and folders with a single click! For more information about DinoDoc and about SharePoint development: http://spdino.wordpress.com

    Discovery House: This is a project demonstrating a green home.

    DocShare: DocShare illustrates the CQRS pattern on Windows Azure and also uses MVC4 Web API. DocShare uses two web roles, one for queries (reads) and one for commands (writes). Each has a UI and a Web API service.

    EnderTecLauncher: EnderTecLauncher

    EntityFilter: This library provides a way to store filtering metadata and reassemble it into dynamic lambda expressions. It allows for groups of filters to be created. Two implementations of IFilterRepository are in development: Database and XML. It's developed in C# for EntityFramework 4.1 and above.

    Epi Info™ - Web Analysis & Visualization: Epi Info™ is a public domain suite of software tools designed for the global community of public health practitioners and researchers. It provides for easy data entry form and database construction, a customized data entry experience, and data analyses with epidemiologic statistics. Epi Info™ Web Analytics & Visualization is an open source project of the popular Epi Info™ suite of tools. The web product can be deployed as an intranet application and will provide analytical and visualization ...

    fOrganiz: This application allows you to automatically organize your pictures by date into specific subdirectories.

    forwork: forwork

    Generic Language - Mobile & Telephony Technologies: Genlang Mobile and Telephony Technologies, a complete application development platform for all platforms: Windows Mobile, Windows Desktop, Web, Apple, Android, BlackBerry.

    gindex: Graph has become increasingly important in modelling complicated structures and schemaless data such as proteins, chemical compounds, and XML documents. Given a graph query, it is desirable to retrieve graphs quickly from a large database via graph-based indices.

    Hijri Date SkinObject: Hijri date skin object for DotNetNuke. Copy to admin/skins and use it in your skin file.

    HorseRaces: Exercise inspired by an example found in the book "Designing for Scalability with Microsoft Windows DNA" by Sten Sundblad and Per Sundblad.

    HotelMS: HotelManageSystem

    html5lmth: test

    jRulee: The jRulee javascript toolkit library

    KrishaTool: olo

    LegSec: LegSec is a small command-line application for collating licence information based on that provided in NuGet packages.

    Modwind Domain Info: The program determines the country of origin for top-level domains and the purpose for international ones.

    MySCM Outlook Addin: This is another tool for the SCM/TFS team. Use this add-in to create, update, and refresh TFS work items from your Outlook emails. Not a substitution, but this little tool can help you to track your various work in TFS while educating and establishing the processes and policies.

    neptouni: This software can be used to convert Nepali TTF text to Unicode characters.

    Northwind SSDT: An SSDT project for the Northwind database. This will enable you to deploy Northwind wherever you like. Note that to allow for hosting in a SQL Azure database that is used to host objects for other applications, all the Northwind objects have been moved into a schema called [Northwind].

    Optional: Optional is a library to create options and commands from command-line arguments. It uses Convention over Configuration to get out of your way. Attributes can be used to set properties which differ from the convention.

    pbdevnpro1: pbdevnpro1,no1

    Projet LIF7 Snake: Projet LIF7 Snake

    PurpleStoat: A modular, extensible Silverlight application shell using Prism, Unity and the Enterprise Library, and written in C#. It includes WCF services which provide AuthZ and logging services to the shell, which are also available to the modules.

    Sharepoint 2010 Weather WebPart using Azure Data Market Met Office Feed: SharePoint 2010 WebPart that displays a 5-day weather forecast for a given location. The weather data is retrieved from the Met Office feed hosted on the Windows Azure Data Market. This is a free data feed that provides weather data for the UK only.

    Silverlight Layouts: Silverlight Layouts is a project for controls that behave as content placeholders with pre-defined GUI layouts for some common scenarios: frozen headers, frozen columns, circle layouts, etc.

    Snom Phone .NET Library: .NET automation library for snom IP phones. Provides a simple class library to interact with your snom phone: press any key on the phone; dial numbers; answer or hang up a call; mute and un-mute; hold and un-hold a call; navigate through a routing phone system using dial tones; get events on incoming or outgoing calls, as well as other events; and more...

    Substrate Windows 8 XAML Framework: Framework for writing Windows 8 applications in XAML.

    Tiger Converters: Tiger is a small language based on expressions, so it's perfect for writing the body of a WPF/SL converter.

    Time manager by bozheville: Time manager by bozheville

    Umbraco 501 on Windows Azure (with Dynamic Deploy): This project is configured to run Umbraco 5.0.1 on Windows Azure via the Dynamic Deploy platform. For more information on Dynamic Deploy visit http://www.dynamicdeploy.com. Dynamic Deploy is a cloud deployment platform from where you can deploy applications directly to cloud platforms (like Windows Azure).

    UnitPrice: This is unit price

    Webmedia: this is my webmedia project

    Windows 8 Metro RSS Reader: An RSS Reader Metro app for Windows 8, written in C# and XAML, based on the sample Grid template.

    Windows Phone UPnP: The basics of a UPnP network stack for Windows Phone, originally based on a blog post. Written in C#; also requires the Async CTP. Includes device discovery via SSDP and method invocation.

    WinRT XAML Toolkit: A set of controls, extensions and helper classes for Windows Runtime XAML applications.

    WmiGuru: WmiGuru is a lightweight F# library for WMI operations such as getting instances, creating instances, and querying associated instances.

    ????: ???? ??.net mvc3??。??jquery+html5????。

    ?????: openwebsite

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?

    Step #1 - Create the certificate to be used

    I've configured a certificate to use with RD Web Access. The certificate is stored in the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are used (they may not all be required, but the self-signed certificate includes them):

        Enhanced Key Usage: Server Authentication, Client Authentication
        Key Usage: Digital Signature, Key Agreement
        Subject Alternative Name: DNS Name=domain.com

    Detour about self-signed certificate generation

    As a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

        PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
        New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell will result in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.

    Step #2 - Observe out-of-box behavior

    Visit the Deployment Properties dialog box by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" drop-down list in the "COLLECTIONS" grouping. The first time you do this, the dialog is misleading, because the Level field is listed as "Not Configured". If I understand correctly, all three of the role services are actually using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website, and the certificate being used also appears in the Certificates MMC.

    Step #3 - Assign a new certificate

    The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC, in the "Personal" certificate store. The private key needs to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration, and once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.

    Step #4 - The GUI fails to maintain its state

    If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used, and I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, the window updates to an "Untrusted" level, and that setting is then maintained through the opening and closing of the Deployment Properties dialog box.

    Is there anything I can do to make my settings appear to stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert from depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated.

    The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf

    The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend.

    I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and also will need to be playing two streams simultaneously in lockstep, on top of, in some cases, audio on one of them (for the cinematics).

    I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it in CPU and separately load the resulting depth values into a locked tex. The conversion unfortunately does involve branching logic per-pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB.

    Programming is second nature to me but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and what off-the-shelf video systems out there would best get along with this usage case.
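
    To make the trade-off concrete, here is a minimal sketch (written in Java purely for illustration, though the engine in question is native C++) of the kind of naive 16-bit depth-to-YCbCr packing alluded to above. This is not the tuned mapping from the linked paper, just the simple byte split, and the comment in main() notes why this naive form degrades under lossy compression.

        // Naive packing of a 16-bit depth value into 8-bit Y, Cb, Cr channels:
        // high byte -> luma, low byte -> one chroma channel. NOT the paper's
        // mapping, which uses a periodic encoding tuned to survive compression.
        public final class NaiveDepthPacking {

            /** Encode a 16-bit depth into {Y, Cb, Cr}, each in 0..255. */
            public static int[] encode(int depth16) {
                int y  = (depth16 >> 8) & 0xFF; // coarse depth
                int cb = depth16 & 0xFF;        // fine depth
                int cr = 128;                   // unused; keep chroma neutral
                return new int[] { y, cb, cr };
            }

            /** Decode {Y, Cb, Cr} back into a 16-bit depth. */
            public static int decode(int y, int cb, int cr) {
                return ((y & 0xFF) << 8) | (cb & 0xFF);
            }

            public static void main(String[] args) {
                int depth = 51234;
                int[] ycbcr = encode(depth);
                // Round-trips exactly without compression. After lossy video
                // compression, a +/-1 error in Y alone shifts the decoded depth
                // by 256 units, and chroma subsampling smears the low byte -
                // which is why the tuned, branching mapping gives better results.
                System.out.println(decode(ycbcr[0], ycbcr[1], ycbcr[2]) == depth);
            }
        }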

    Read the article

  • Play audio file data - Spring MVC

    - by Vijay Veeraraghavan
    In my web application, I have various audio clips uploaded by the users, stored in a BLOB column in the database. The audio files are low-bit-rate WAV files. The clips are secured: one can see only those clips he has uploaded. Instead of the user downloading a clip and playing it in his own player, I need it to be streamed in the web page itself. In the JSP I use the <audio> tag, with the source mapped to the controller mapping URL:

        <td>
          <audio controls>
            <source src="recfile/${au.id}" type="audio/mpeg" />
          </audio>
        </td>

    where recfile is the request mapping and au.id is the audio id. In the controller I process the request like below:

        @RequestMapping(value = "/recfile/{id}", method = RequestMethod.GET,
                produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE })
        public HttpEntity<byte[]> downloadRecipientFile(@PathVariable("id") int id,
                ModelMap model, HttpServletResponse response)
                throws IOException, ServletException {
            LOGGER.debug("[GroupListController downloadRecipientFile]");
            VoiceAudioLibrary dGroup = audioClipService.findAudioClip(id);
            if (dGroup == null || dGroup.getAudioData() == null
                    || dGroup.getAudioData().length <= 0) {
                throw new ServletException("No clip found/clip has no data, id=" + id);
            }
            HttpHeaders header = new HttpHeaders();
            // I tried this too:
            // header.setContentType(new MediaType("audio", "mp3"));
            header.setContentType(new MediaType("audio", "vnd.wave"));
            header.setContentLength(dGroup.getAudioData().length);
            return new HttpEntity<byte[]>(dGroup.getAudioData(), header);
        }

    When the JSP loads, the controller gets the request and serves back the audio data fetched from the database, and the JSP shows the player with the controls. But when I play it, nothing happens. Why is it? Am I missing anything in the configuration? Am I doing it right?
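
    One detail worth checking in a setup like this, offered as a guess rather than a confirmed fix: the <source> tag declares type="audio/mpeg" while the stored bytes are WAV and the handler advertises audio/vnd.wave (or octet-stream via produces), and browsers can refuse to play a source whose declared type contradicts the response. A minimal sketch that keeps the JSP declaration and the response type aligned for WAV data might look like the following, dropped into the same controller with the standard org.springframework.http imports; the method and behavior beyond the question's own names are illustrative.

        // JSP side would declare the matching type:
        // <source src="recfile/${au.id}" type="audio/wav" />
        @RequestMapping(value = "/recfile/{id}", method = RequestMethod.GET)
        public HttpEntity<byte[]> streamClip(@PathVariable("id") int id)
                throws ServletException {
            VoiceAudioLibrary clip = audioClipService.findAudioClip(id);
            if (clip == null || clip.getAudioData() == null) {
                throw new ServletException("No clip found, id=" + id);
            }
            HttpHeaders headers = new HttpHeaders();
            // Serve the same MIME type the <source> tag declares.
            headers.setContentType(new MediaType("audio", "wav"));
            headers.setContentLength(clip.getAudioData().length);
            // Advertising byte ranges helps some players seek; full Range
            // handling is not implemented in this sketch.
            headers.set("Accept-Ranges", "bytes");
            return new HttpEntity<byte[]>(clip.getAudioData(), headers);
        }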

    Read the article

  • Axis2 embedded in my web app is not working

    - by Shamik
    Okay, I lost almost the whole day on this. I have a webapp to which I would like to add Axis2 and start working. I added the AxisServlet in the web.xml file like so:

        <servlet>
          <servlet-name>AxisServlet</servlet-name>
          <display-name>Apache-Axis Servlet</display-name>
          <servlet-class>org.apache.axis2.transport.http.AxisServlet</servlet-class>
          <load-on-startup>2</load-on-startup>
        </servlet>
        <servlet-mapping>
          <servlet-name>AxisServlet</servlet-name>
          <url-pattern>/services/*</url-pattern>
        </servlet-mapping>

    I also added a services.xml file like:

        <service name="ReportViewerService">
          <description>
            This is a sample Web Service for illustrating Attachments API of Axis2
          </description>
          <parameter name="ServiceClass">myclass</parameter>
          <operation name="getReport">
            <actionMapping>urn:getReport</actionMapping>
            <messageReceiver class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
          </operation>
        </service>

    The directory structure is as follows:

        WEB-INF
        |- conf
        |  |- axis2.xml
        |- lib
        |  |- all libs
        |- services
        |  |- ReportViewerService
        |     |- META-INF
        |        |- services.xml
        |- web.xml

    The problem is that after all of this, the service endpoint will not come up; I cannot see the WSDL file at http://localhost:8080/BOReportingServer/services/ReportViewerService?wsdl - this gives an exception like:

        Throwable occurred: javax.servlet.ServletException: File "/axis2-web/listSingleService.jsp" not found
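
    For reference, the ServiceClass parameter in services.xml has to name a class on the webapp's classpath whose public method names match the declared operations; the RPCMessageReceiver dispatches the getReport operation onto a method of that name. A hypothetical minimal shape for myclass, with an illustrative signature not taken from the original project, would be:

        // Sketch of the POJO that services.xml's ServiceClass parameter points at.
        // The method name must match the declared operation "getReport"; the
        // parameter and return types here are placeholders.
        public class myclass {
            public String getReport(String reportId) {
                // Real logic would fetch or render the report; placeholder only.
                return "report-" + reportId;
            }
        }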

    Read the article

  • Java+DOM: How do I convert a DOM tree without namespaces to a namespace-aware DOM tree?

    - by java.is.for.desktop
    Hello, everyone!

    I receive a Document (DOM tree) from a certain API (not in the JDK). Sadly, this Document is not namespace-aware. As far as I know, once a DOM has been generated, namespace-awareness can't be "added" afterwards. When converting this Document to a string using a Transformer, the XML is correct: elements have xmlns:... attributes and name prefixes. But from the DOM point of view, there are no namespaces and no prefixes.

    I need to be able to convert this Document into a new Document which is namespace-aware. Yes, I could do this by just converting it to a string and parsing it back into a DOM with namespaces enabled. But: nodes of the original tree have user objects set, and converting to a string and back would make mapping these user objects onto the new Document very complicated, if not impossible. So I really need a way to convert a non-namespace DOM to a namespace-aware DOM. Are there any more-or-less straightforward solutions for this?

    The worst case (which I'm hoping to avoid) would be to manually iterate through the old Document tree and create a new namespace-aware Node for each old Node. Doing so, one would have to manually "parse" namespace prefixes, watch out for xmlns attributes, and maintain a mapping between prefixes and namespace URIs. Lots of things to go wrong.
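
    For a sense of what that worst-case walk involves, below is a minimal sketch of a tree-to-tree copy that tracks xmlns declarations by hand. It is an illustration under simplifying assumptions (attributes are copied without namespaces, and DTD/entity nodes are ignored), not a complete solution; the child-copy loop is also the natural place to carry per-node user data across, which is the reason to prefer this over a string round-trip.

        import java.util.HashMap;
        import java.util.Map;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.*;

        public class NamespaceRebuilder {

            public static Document rebuild(Document oldDoc) throws Exception {
                DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
                f.setNamespaceAware(true);
                Document newDoc = f.newDocumentBuilder().newDocument();
                newDoc.appendChild(copy(oldDoc.getDocumentElement(), newDoc,
                        new HashMap<String, String>()));
                return newDoc;
            }

            private static Node copy(Node old, Document newDoc, Map<String, String> ns) {
                if (old.getNodeType() != Node.ELEMENT_NODE) {
                    return newDoc.importNode(old, false); // text, comments, PIs, ...
                }
                // Collect the xmlns declarations in scope for this element.
                Map<String, String> scope = new HashMap<String, String>(ns);
                NamedNodeMap atts = old.getAttributes();
                for (int i = 0; i < atts.getLength(); i++) {
                    String n = atts.item(i).getNodeName();
                    if (n.equals("xmlns")) {
                        scope.put("", atts.item(i).getNodeValue());
                    } else if (n.startsWith("xmlns:")) {
                        scope.put(n.substring(6), atts.item(i).getNodeValue());
                    }
                }
                String qname = old.getNodeName();
                int colon = qname.indexOf(':');
                String prefix = colon < 0 ? "" : qname.substring(0, colon);
                // createElementNS throws if a prefix has no in-scope declaration,
                // which is a useful way to catch malformed input.
                Element el = newDoc.createElementNS(scope.get(prefix), qname);
                for (int i = 0; i < atts.getLength(); i++) {
                    Attr a = (Attr) atts.item(i);
                    el.setAttribute(a.getNodeName(), a.getNodeValue());
                }
                // Recurse over children; this is also where you would transfer
                // any user objects from the old node to the new one.
                for (Node c = old.getFirstChild(); c != null; c = c.getNextSibling()) {
                    el.appendChild(copy(c, newDoc, scope));
                }
                return el;
            }
        }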

    Read the article

  • pylucene: install error

    - by Pradeep
    I am trying to install PyLucene (pylucene-3.3-3-src.tar.gz) on Ubuntu Linux 11.10. I have Python 2.7.2. I was able to compile JCC (I think), because I didn't see any errors when I installed it. When I try to install PyLucene I get the following error. Can someone help? Thanks.

        ICU not installed
        /usr/bin/python -m jcc --shared --jar lucene-java-3.3/lucene/build/lucene-core-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/analyzers/common/lucene-analyzers-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/memory/lucene-memory-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/highlighter/lucene-highlighter-3.3.jar --jar build/jar/extensions.jar --jar lucene-java-3.3/lucene/build/contrib/queries/lucene-queries-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/grouping/lucene-grouping-3.3.jar --package java.lang java.lang.System java.lang.Runtime --package java.util java.util.Arrays java.util.HashMap java.util.HashSet java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator --package java.util.regex --package java.io java.io.StringReader java.io.InputStreamReader java.io.FileInputStream --exclude org.apache.lucene.queryParser.Token --exclude org.apache.lucene.queryParser.TokenMgrError --exclude org.apache.lucene.queryParser.QueryParserTokenManager --exclude org.apache.lucene.queryParser.ParseException --exclude org.apache.lucene.search.regex.JakartaRegexpCapabilities --exclude org.apache.regexp.RegexpTunnel --exclude org.apache.lucene.analysis.cn.smart.AnalyzerProfile --python lucene --mapping org.apache.lucene.document.Document 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping java.util.Properties 'getProperty:(Ljava/lang/String;)Ljava/lang/String;' --sequence java.util.AbstractList 'size:()I' 'get:(I)Ljava/lang/Object;' --rename org.apache.lucene.search.highlight.SpanScorer=HighlighterSpanScorer --version 3.3 --module python/collections.py --module python/ICUNormalizer2Filter.py --module python/ICUFoldingFilter.py --module python/ICUTransformFilter.py --files 3 --build
        /usr/bin/python: No module named jcc
        make: *** [compile] Error 1

    Here is my Makefile configuration, which I uncommented:

        PREFIX_PYTHON=/usr
        ANT=ant
        PYTHON=$(PREFIX_PYTHON)/bin/python
        JCC=$(PYTHON) -m jcc --shared
        NUM_FILES=3

    Read the article

  • How do I use Fluent NHibernate with .NET 4.0?

    - by Tomas Lycken
    I want to learn to use Fluent NHibernate, and I'm working in VS2010 Beta 2, compiling against .NET 4, but I'm experiencing some problems.

    Summary: my main problem (at the moment) is that the namespace FluentNHibernate isn't available even though I've imported all the .dll assemblies mentioned in this guide. This is what I've done:

    1. I downloaded the Fluent NHibernate source from here, extracted the .zip, and opened the solution in VS. A dialog asked me if I wanted to convert the solution to a VS2010 solution, so I did.
    2. I then went into each project's properties, configured all of them to compile for .NET 4, and built the entire solution.
    3. I copied all the .dll files from /bin/Debug/ of the FluentNHibernate project to a new folder on my local hard drive.
    4. In my example project, I referenced FluentNHibernate.dll and NHibernate.dll from the new folder.

    This is my problem: if I right-click on FluentNHibernate in the References list and select "View in Object Browser...", it shows up correctly. But when I try to create a mapping class, I can't import FluentNHibernate. This code:

        using FluentNHibernate.Mapping; namespace FluentNHExample.Mappings { }

    generates an error on the using statement, saying "The type or namespace 'FluentNHibernate' could not be found (are you missing a using directive or an assembly reference?)". The FluentNHibernate assembly is still in the list of References of my project, but if I try to browse the assembly in Object Browser again, it can't be found. What is causing this?

    Read the article

  • Interpolation and Morphing of an image in LabVIEW and/or OpenCV

    - by Marc
    I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map proj_x, proj_y <-- cam_x, cam_y for scattered point pairs.

    My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:

        [pgridx, pgridy] = meshgrid(allprojxpts, allprojypts);
        fitcx = griddata(proj_x, proj_y, cam_x, pgridx, pgridy);
        fitcy = griddata(proj_x, proj_y, cam_y, pgridx, pgridy);

    and the reverse for the camera-to-projector mapping. Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for LabVIEW to handle).

    I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick. So my question is one of the following:

    1) How can I interpolate from scattered points to a grid in OpenCV (i.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y))?
    2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented either in LabVIEW or OpenCV.

    Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation".
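    For orientation (a general note on the method, not LabVIEW- or OpenCV-specific): linear scattered interpolation over a Delaunay triangulation, which is what griddata does by default, evaluates each grid point by barycentric weighting of the three data points whose triangle contains it:

        z(x, y) = \lambda_1 z_1 + \lambda_2 z_2 + \lambda_3 z_3, \qquad \lambda_i \ge 0, \quad \lambda_1 + \lambda_2 + \lambda_3 = 1

    where (x_i, y_i, z_i) are the vertices of the enclosing triangle and the \lambda_i are the barycentric coordinates of (x, y). Any code that can triangulate the scattered points and locate the enclosing triangle can therefore build, one pixel at a time, the dense map that cvRemap expects.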

    Read the article

  • How to let Tomcat publish WSDL for the WS it provides (CXF 2.2, Spring 3, Tomcat6)

    - by Zwei Steinen
    Hi, I am trying to implement a simple web service provider using Tomcat6, CXF 2.2, and Spring 3, and the service itself actually runs fine (I can call web methods using the original WSDL and SoapUI). However, Tomcat returns a blank page on "?wsdl" requests. Also, when I try to manipulate the (would-be) published WSDL by adding a publishedEndpointURL property to the jaxws:endpoint element, Tomcat issues an XML parse exception (something like "property publishedEndpointURL is not allowed in element jaxws:endpoint"):

        <jaxws:endpoint id="service" implementor="org.sample.ServiceImpl" implementorClass="org.sample.ServiceImpl" address="/service" publishedEndpointURL="http://localhost:8080/MyService/service">

    I used the "contract first" style. EDIT: what I did so far:

    1. Set up Tomcat6 with Spring 3.
    2. Generate the CXF implementation class using Maven.
    3. Provide web.xml (only the relevant part shown): <listener> <listener-class>org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <servlet> <servlet-name>cxf</servlet-name> <servlet-class> org.apache.cxf.transport.servlet.CXFServlet </servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>cxf</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping>
    4. Provide applicationContext.xml (only the relevant part is shown).
    5. Package the generated stuff into a war and deploy.
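    One detail worth checking, stated as a recollection of the CXF Spring schema rather than something confirmed in this thread: in CXF 2.x the attribute is spelled publishedEndpointUrl (lowercase "rl"), so the capitalized publishedEndpointURL would be rejected with exactly this kind of schema validation error. A sketch with the assumed spelling:

        <jaxws:endpoint id="service"
                        implementor="org.sample.ServiceImpl"
                        implementorClass="org.sample.ServiceImpl"
                        address="/service"
                        publishedEndpointUrl="http://localhost:8080/MyService/service"/>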

    Read the article

  • log4j:WARN No appenders could be found for logger in web.xml

    - by cometta
    I already put the log4jConfigLocation in web.xml, but I still get the warning:

        log4j:WARN No appenders could be found for logger (org.springframework.web.context.ContextLoader).
        log4j:WARN Please initialize the log4j system properly.

    What did I miss?

        <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/applicationContext.xml </param-value> </context-param> <context-param> <param-name>log4jConfigLocation</param-name> <param-value>/WEB-INF/classes/log4j.properties</param-value> </context-param> <listener> <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class> </listener> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <servlet> <servlet-name>suara2</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>suara2</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping>
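    A quick diagnostic sketch (not the confirmed cause; the path below is an assumption matching the web.xml entry): configure log4j programmatically and see whether the warning disappears, which separates a wrong-path problem from a malformed-properties problem.

        import org.apache.log4j.Logger;
        import org.apache.log4j.PropertyConfigurator;

        public class Log4jSmokeTest {
            public static void main(String[] args) {
                // Point this at the same file the web.xml references
                // (/WEB-INF/classes/log4j.properties in the exploded webapp).
                PropertyConfigurator.configure("src/main/webapp/WEB-INF/classes/log4j.properties");
                Logger.getLogger(Log4jSmokeTest.class).info("log4j is configured");
            }
        }

    If this works but the webapp still warns, the usual suspects are listener ordering or the properties file not ending up under WEB-INF/classes at packaging time.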

    Read the article

  • Transferring data from Salesforce using Apex Data Loader to Oracle

    - by Barret
    While attempting to transfer data from Salesforce to Oracle using the Apex Data Loader, I keep getting the following error:

        26937 [databaseAccountExtract] FATAL com.salesforce.dataloader.dao.database.DatabaseContext - Error getting value for SQL parameter: nkey__c. Please make sure that the value exists in the configuration file or is passed in. Database configuration: insertAccount.

    The database-conf.xml has the following beans:

        <bean id="insertAccount" class="com.salesforce.dataloader.dao.database.DatabaseConfig" singleton="true"> <property name="sqlConfig" ref="insertAccountSql"/> <property name="dataSource" ref="dbDataSource"/> </bean>

        <bean id="insertAccountSql" class="com.salesforce.dataloader.dao.database.SqlConfig" singleton="true"> <property name="sqlString"> <value> INSERT INTO VANTROPO.SF_ACCOUNTCHANNEL (nkey__c) VALUES (@nkey__c@) </value> </property> <property name="sqlParams"> <map> <entry key="nkey__c" value="java.lang.String"/> </map> </property> </bean>

    The SDL (mapping file) has the following values:

        # Account Insert Mapping values for query from Salesforce (left) and insert/update to Oracle (right)
        # SalesforceFieldName=OracleFieldName
        nkey__c=NKEY__C

    Any help appreciated.
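    A hedged guess rather than a verified fix: the Data Loader resolves SQL parameter values by the names the mapping file produces, and that lookup appears to be case-sensitive, so the right-hand side of the .sdl entry may need to match the sqlParams key exactly:

        # SalesforceFieldName=SqlParamName (hypothetical corrected mapping)
        nkey__c=nkey__c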

    Read the article

  • jax-ws on glassfish3 init method

    - by Alex
    Hi all, I've created a simple JAX-WS service (an annotated Java 6 class exposed as a web service) and deployed it on GlassFish v3. The web.xml:

        <?xml version="1.0" encoding="ISO-8859-1"?> <web-app> <servlet> <servlet-name>MyServiceName</servlet-name> <description>Blablabla</description> <servlet-class>com.foo-bar.somepackage.TheService</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>MyServiceName</servlet-name> <url-pattern>/MyServiceName</url-pattern> </servlet-mapping> <session-config> <session-timeout>30</session-timeout> </session-config> </web-app>

    There is no sun-jaxws.xml in the war. The service works fine, but I have two issues. I'm using the Apache Commons Configuration package to read my configuration, so I have an init function that calls the configuration stuff.

    1. How can I configure an init method for the JAX-WS service (like I can do for servlets, for example)?
    2. The load-on-startup parameter is not affecting the service; I see that the init function (and the constructor) is called again for every request. How can I set the scope for my service?

    Thanks a lot,
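    On the first question, a sketch based on the JSR-250 lifecycle annotations that JAX-WS containers honor (the class body below is illustrative, not the poster's actual code): annotate a no-argument method with @PostConstruct and move the one-time Commons Configuration loading there instead of the constructor.

        import javax.annotation.PostConstruct;
        import javax.jws.WebService;

        @WebService
        public class TheService {

            @PostConstruct
            public void init() {
                // Called once by the container after the endpoint instance is
                // created; load the Apache Commons Configuration here.
            }

            // Web methods as before...
        }

    If the constructor really is invoked on every request, that points at the endpoint being instantiated per request, which is worth verifying before relying on any instance state.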

    Read the article

  • JSON-RPC and Json-rpc service discovery specifications

    - by Artyom
    Hello, I'm going to implement a JSON-RPC web service and I need specifications for it. So far I have found only one resource that can be called a real specification: JSON-RPC 1.0, http://json-rpc.org/wiki/specification. There is also the proposal for JSON-RPC 2.0: http://groups.google.com/group/json-rpc/web/json-rpc-2-0 (why is it on Google Groups?). However, I've seen that JavaScript frameworks like Dojo actively use the JSON-RPC SMD (Service Mapping Description) proposal, but that requires the JSON Schema specification, and it redirects to an incorrect URL as a reference. So far I have found the following: http://tools.ietf.org/html/draft-zyp-json-schema-02, and it is still a draft... Can anybody point me to some actual specifications, or at least something official and updated? It looks like implementing JSON-RPC 1.0 as-is may not be enough, at least for frameworks like Dojo. Or am I wrong? Questions:

    1. Would an implementation of the JSON-RPC 1.0 specification be enough to provide a JSON-RPC service for most modern clients, and how many clients are there (if any) that actually require capabilities beyond JSON-RPC 1.0 (SMD, Schema, 2.0)? It looks like JSON-RPC 1.0 is the only one that has an official specification (and not a draft).
    2. If I should implement SMD, or if it is recommended, can somebody point to the official, most recent specifications of JSON Schema and Service Mapping Description, or are the links I found really "the specifications"?
    3. Are the JSON-RPC 2.0, SMD, and JSON Schema drafts stable enough to implement?

    Note: do not suggest existing JSON-RPC service implementations. Anybody?
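    For concreteness, the JSON-RPC 1.0 wire format defined by the specification linked above is tiny: a request carries method, params, and id, and the response echoes the id with exactly one of result or error non-null (this is essentially the spec's own echo example):

        --> {"method": "echo", "params": ["Hello JSON-RPC"], "id": 1}
        <-- {"result": "Hello JSON-RPC", "error": null, "id": 1}

    A client that speaks only 1.0 needs nothing more than this, typically carried over an HTTP POST or a persistent socket.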

    Read the article

  • Full JSON-RPC specifications

    - by Artyom
    Hello, I'm going to implement a JSON-RPC web service and I need specifications for it. So far I have found only one resource that can be called a real specification: JSON-RPC 1.0, http://json-rpc.org/wiki/specification. There is also the proposal for JSON-RPC 2.0: http://groups.google.com/group/json-rpc/web/json-rpc-2-0 (why is it on Google Groups?). However, I've seen that JavaScript frameworks like Dojo actively use the JSON-RPC SMD (Service Mapping Description) proposal, but that requires the JSON Schema specification, and it redirects to an incorrect URL as a reference. So far I have found the following: http://tools.ietf.org/html/draft-zyp-json-schema-02, and it is still a draft... Can anybody point me to some actual specifications, or at least something official and updated? It looks like implementing JSON-RPC 1.0 and 2.0 would not be enough, at least for frameworks like Dojo. Or am I wrong? Questions:

    1. Is it enough to implement the JSON-RPC 1.0 specification and the 2.0 draft to be on the safe side, and would this work for most JSON-RPC clients?
    2. If I should implement SMD, or if it is recommended, can somebody point to the official specifications of JSON Schema and Service Mapping Description, or are the links I found really "the specifications"?

    Note: do not suggest existing JSON-RPC service implementations.

    Read the article

  • Nhibernate Guid with PK MySQL

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a question. I use NHibernate with MySQL. In my entities I use Id (PK) for business-logic usage and Guid (for replication). So my BaseDomain is:

        public class BaseDomain { public virtual int Id { get; set; } public virtual Guid Guid { get; set; } public class Properties { public const string Id = "Id"; public const string Guid = "Guid"; } public BaseDomain() { } }

    My concrete domain class:

        public class ActivityCategory : BaseDomain { public ActivityCategory() { } public virtual string Name { get; set; } public new class Properties { public const string Id = "Id"; public const string Guid = "Guid"; public const string Name = "Name"; private Properties() { } } }

    Mapping:

        <class name="ActivityCategory, Clients.Core" table='Activity_category'> <id name="Id" unsaved-value="0" type="int"> <column name="Id" not-null="true"/> <generator class="native"/> </id> <property name="Guid"/> <property name="Name"/> </class>

    But when I insert my entity:

        [Test] public void Test() { ActivityCategory ac = new ActivityCategory(); ac.Name = "Test"; using (var repo = new Repository<ActivityCategory>()) repo.Save(ac); }

    I always get '00000000-0000-0000-0000-000000000000' in my Guid field. What should I do to generate the right Guid? Maybe something in the mapping? Thanks a lot!

    Read the article

  • How to delete child object in NHibernate?

    - by Mark Struzinski
    I have a parent object which has a one-to-many relationship with an IList of child objects. What is the best way to delete the child objects? I am not deleting the parent. Here is the mapping for the one-to-many relationship:

        <bag name="Tiers" cascade="all"> <key column="mismatch_id_no" /> <one-to-many class="TGR_BL.PromoTier,TGR_BL"/> </bag>

    If I try to remove all objects from the collection using Clear() and then call SaveOrUpdate(), I get this exception: System.Data.SqlClient.SqlException: Cannot insert the value NULL into column. If I try to delete the child objects individually and then remove them from the parent, I get the exception: "deleted object would be re-saved by cascade". This is my first time dealing with deleting child objects in NHibernate. What am I doing wrong? Edit: just to clarify, I'm NOT trying to delete the parent object, just the child objects. I have the relationship set up as a one-to-many on the parent. Do I also need to create a many-to-one relationship on the child object mapping?
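    A common resolution for exactly this pair of symptoms (offered as a hedged suggestion based on standard NHibernate cascade semantics, not as this thread's confirmed answer) is to make the collection own its children, so that removing an item deletes the orphan instead of trying to null out its foreign key:

        <bag name="Tiers" cascade="all-delete-orphan">
          <key column="mismatch_id_no" />
          <one-to-many class="TGR_BL.PromoTier,TGR_BL"/>
        </bag>

    With that in place, clearing the Tiers collection and saving the parent deletes the child rows, and no many-to-one back-reference is required just for deletion.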

    Read the article

< Previous Page | 282 283 284 285 286 287 288 289 290 291 292 293  | Next Page >