Search Results

Search found 31696 results on 1268 pages for 'client side validation'.

  • How to add a condition on a multiple-join table

    - by Jean-Philippe
    Hi, I have these two tables: client: id (int) #PK, name (varchar); client_category: id (int) #PK, client_id (int), category (int). Let's say I have this data: client: {(1, "JP"), (2, "Simon")} and client_category: {(1, 1, 1), (2, 1, 2), (3, 1, 3), (4, 2, 2)}. tl;dr: client #1 has categories 1, 2 and 3, and client #2 has only category 2. I am trying to build a query that lets me search on multiple categories at once. For example, I would like to find every client that has at least categories 1 and 2 (which would return client #1). How can I achieve that? Thanks!
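
    A standard way to answer this kind of "has all of these categories" requirement is to filter the join rows down to the wanted categories, group by client, and require the group to contain as many distinct matches as there are wanted categories. Below is a minimal, self-contained sketch of that GROUP BY/HAVING technique using an in-memory SQLite database; the table and column names come from the question, but dialect details may differ on your database.

    ```python
    import sqlite3

    # Rebuild the question's tables and data in memory.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE client_category (
            id INTEGER PRIMARY KEY, client_id INTEGER, category INTEGER);
        INSERT INTO client VALUES (1, 'JP'), (2, 'Simon');
        INSERT INTO client_category VALUES (1,1,1), (2,1,2), (3,1,3), (4,2,2);
    """)

    wanted = [1, 2]  # the client must have ALL of these categories
    placeholders = ",".join("?" * len(wanted))
    rows = conn.execute(f"""
        SELECT c.id, c.name
        FROM client c
        JOIN client_category cc ON cc.client_id = c.id
        WHERE cc.category IN ({placeholders})
        GROUP BY c.id, c.name
        HAVING COUNT(DISTINCT cc.category) = ?
    """, (*wanted, len(wanted))).fetchall()

    print(rows)  # [(1, 'JP')] -- only client #1 has both category 1 and 2
    ```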

    Read the article

  • Is there an example of checking whether a WCF Service is online?

    - by apolfj
    I will have a client application using a proxy to a WCF Service. This client will be a Windows Forms application using basicHttpBinding to N endpoints at an address. The problem I want to solve is that any Windows Forms application going across the internet to my web server needs to know that this particular WCF Service is online. I need an example of how this client, on a background thread, can poll the service with a simple "WCF Service... are you there?" check. That way our client application can notify users before they invest a lot of time building up work client-side, only to be frustrated by the WCF Service being offline. Again, I am looking for a simple way to ask a WCF Service "Are you there?"
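
    One language-agnostic pattern is to poll a cheap endpoint of the service from a background thread and raise a notification whenever availability flips. Here is a rough sketch of that polling loop in Python; the service URL, the 5-second interval, and the idea of treating any reachable response as "online" are all assumptions of this sketch (in WCF you might instead expose a trivial Ping() operation and call it through the generated proxy).

    ```python
    import threading
    import time
    import urllib.request

    def poll_service(url, on_change, interval=5.0):
        """Poll `url` forever; call on_change(is_online) whenever state flips."""
        state = None
        while True:
            try:
                # Treat any answer as "alive"; an exception (refused connection,
                # timeout, DNS failure, HTTP error status) counts as offline.
                urllib.request.urlopen(url, timeout=3)
                online = True
            except OSError:
                online = False
            if online != state:
                state = online
                on_change(online)
            time.sleep(interval)

    # Hypothetical endpoint address; run the poller on a daemon thread so the
    # form's UI thread is never blocked.
    t = threading.Thread(
        target=poll_service,
        args=("http://example.com/MyService.svc", lambda up: print("online:", up)),
        daemon=True,
    )
    t.start()
    time.sleep(12)  # keep this demo process alive long enough to see a result
    ```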

    Read the article

  • Is it possible to run C++ code bound to SDL+OpenGL in a web browser?

    - by unknownthreat
    My client wants her website to have an application that renders 3D (light 3D stuff; we are drawing only flat squares in a 3D world), but web programming is not my thing, so I am looking for something that can run a C++ program from a web browser. My worry is that in that case the client side must download the program first, and that's not what I want: the user should only be able to use this application on the website. I came across Google Native Client, which claims that it can run x86 native code in web applications. I haven't decided whether it is worth it, and I don't know whether it is what I want, so I decided to ask experienced people about this. Is what I described above possible? Or do I need a different technology like Flex because Native Client isn't worth the trouble? Is Google Native Client suitable for something like this?

    Read the article

  • How to hide keys in an application?

    - by WilliamKF
    I have a C++ client/server application where the client and server are my executables. Each time a connection is made between the client and server, I generate a new encryption key for that session, and I wish to transmit this session key encrypted with a static key that is built into both the client and server. However, running strings on my executable reveals the static key. How can I hide the embedded static key in my client and server applications so that it is not easily extracted, allowing someone to decode my session keys?
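
    No embedding scheme survives a determined attacker with a debugger, but a common way to at least defeat strings is to avoid storing the key as one contiguous literal: keep an XOR-masked (or split) copy in the binary and reassemble the real key in memory just before use. Below is a toy illustration of that idea in Python; the mask and key bytes are invented, and in the real C++ build the masked bytes would live in a plain byte array.

    ```python
    MASK = 0x5A  # arbitrary one-byte mask chosen at build time

    def obfuscate(key: bytes) -> bytes:
        """Run once at build time; paste the resulting bytes into the source."""
        return bytes(b ^ MASK for b in key)

    # What actually gets compiled in: masked bytes that don't look like the key,
    # so `strings` won't show the real value. (Key value here is made up.)
    EMBEDDED = obfuscate(b"example-static-key")

    def real_key() -> bytes:
        """Reassemble the actual key in memory, just before use."""
        return bytes(b ^ MASK for b in EMBEDDED)

    assert real_key() == b"example-static-key"
    ```

    Note this only raises the bar: anyone who can dump process memory still recovers the key, which is why designs that embed only a public key (with the private key held server-side) are usually preferred.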

    Read the article

  • Implementing keepalives in Java

    - by Bilal
    Hi all, I am building a client-server application where I have to implement a keepalive mechanism in order to detect whether the client has crashed. I have separate threads on both the client and server side. The client thread sends a "ping", then sleeps for 3 seconds; the server reads the buffered input stream and checks whether a ping was received: if so, it resets the ping counter to zero, otherwise it increments the counter by 1. The server thread then sleeps for 3 seconds, and if the ping counter reaches 3 it declares the client dead. The problem is that the server's read on the input stream is a blocking call, and it blocks until the next ping is received, no matter how delayed it is, so the server never detects a missed ping. Any suggestions for reading the current contents of the stream without blocking when nothing has arrived? Thanks.
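
    The usual fix is to give the read a timeout instead of letting it block indefinitely: if no ping arrives within the window, the read times out and the miss counter advances. In Java that knob is Socket.setSoTimeout; below is a minimal sketch of the same loop in Python, with the 3-second window and 3-miss limit taken from the question.

    ```python
    import socket

    def watch_client(conn: socket.socket, window=3.0, max_misses=3):
        """Declare the client dead after `max_misses` windows with no ping."""
        conn.settimeout(window)   # recv() now waits at most `window` seconds
        misses = 0
        while misses < max_misses:
            try:
                data = conn.recv(64)
                if not data:          # empty read: orderly disconnect
                    break
                if b"ping" in data:
                    misses = 0        # saw a ping, reset the counter
            except socket.timeout:
                misses += 1           # window elapsed with no ping
        print("client declared dead or disconnected")
    ```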

    Read the article

  • How could I use AJAX to create a JSON data source .txt file?

    - by Adam
    I'm creating a form that collects standard information about customers. When the user hits save, I would like to create a .txt file that would be used later to retrieve all of the data collected from customers. I'm using DataTables, a jQuery plugin, to display the data. The .txt file would be formatted as follows, where "aaData" is the key DataTables expects:

    { "aaData": [
      ["client 1 name","address","city","state","zip"],
      ["client 2 name","address","city","state","zip"],
      ["client 3 name","address","city","state","zip"],
      ...
      ["client x name","address","city","state","zip"]
    ] }

    This is going to be part of an iPhone app, so the data source has to be very small and not rely on a constant connection to a server; essentially, a client-side data source. The .txt file also has to be updated when edited and saved, and then replaced every time it is downloaded.
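
    Browser-side JavaScript cannot write a file on the server by itself, so the usual pattern is for the AJAX save call to POST the rows to a small server-side endpoint that serializes them into the aaData shape DataTables expects. Here is a sketch of just the serialization step in Python; the field order and file name follow the question's example, and how the rows arrive at the server is left to your AJAX layer.

    ```python
    import json

    def save_datasource(rows, path="customers.txt"):
        """rows: list of [name, address, city, state, zip] lists."""
        with open(path, "w") as f:
            # DataTables' default array-of-arrays source, keyed by "aaData".
            json.dump({"aaData": rows}, f)

    save_datasource([
        ["client 1 name", "address", "city", "state", "zip"],
        ["client 2 name", "address", "city", "state", "zip"],
    ])
    ```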

    Read the article

  • What are my alternatives to manage Python packages for clients?

    - by c00kiemonster
    So the setup is a slew of proprietary server/client Python applications running on one Linux box (the server) and a set of Windows 7 workstations (the clients). Everything runs smoothly until any of the proprietary Python packages needs updating. For now I am using distutils eggs, which are easily updated with easy_install, but it is still a manual process that quickly becomes tedious as the number of applications and client workstations grows. The ideal setup, IMHO, is to keep the Python packages on the server so that when a client application is launched on a workstation it can check whether its current packages are up to date. If not, the client application should download the newer package from the server, install it, and then launch as normal. Does this sound familiar to anyone? I have tried to find alternatives myself, but as far as I can see there is no Python module offering this functionality. Does anyone have a home-made solution for this?
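
    One home-grown approach is to publish a small version manifest next to the packages on the server: at startup the client compares its installed versions against the manifest and installs anything stale before continuing. A rough sketch follows; the manifest URL, its JSON format, and the use of pip against a private index are all assumptions of this sketch rather than an off-the-shelf tool.

    ```python
    import json
    import subprocess
    import sys
    import urllib.request
    from importlib.metadata import PackageNotFoundError, version

    # Hypothetical manifest published on the server, e.g. {"mypkg": "1.4"}.
    MANIFEST_URL = "http://server.local/packages/versions.json"

    def update_if_stale():
        with urllib.request.urlopen(MANIFEST_URL, timeout=5) as resp:
            wanted = json.load(resp)
        for name, target in wanted.items():
            try:
                current = version(name)
            except PackageNotFoundError:
                current = None
            if current != target:
                # Install the exact version the server says is current
                # (assumes the server also exposes a pip-compatible index).
                subprocess.check_call([
                    sys.executable, "-m", "pip", "install",
                    f"{name}=={target}",
                    "--index-url", "http://server.local/simple",
                ])

    update_if_stale()
    ```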

    Read the article

  • Ideal way/architecture to deliver large data over Web Services

    - by zengr
    We are trying to design 6 web services which will serve another client component. The client component requires data from the web services we are implementing. The twist is that there is not just one WS: the client component hits a single front WS, which initiates a series of five more WSs that gather data from their respective data stores and finally return it to the original WS, which then delivers it to the client component. So if the requested data becomes huge, this becomes a serious problem for our internal communication channel. What do you suggest? What can be done to avoid overloading the communication channel between the internal WSs while still delivering the data to the client component?
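
    A common mitigation for this aggregator pattern is to never materialize the whole payload on the front web service: have each internal service hand back data in pages (ideally compressed) and stream each page through to the client as it arrives. Below is a schematic sketch of that lazy aggregation using Python generators; the page size and the service.get interface are invented for illustration.

    ```python
    def fetch_pages(service, query, page_size=500):
        """Pull one page at a time from an internal service (interface assumed)."""
        offset = 0
        while True:
            page = service.get(query, offset=offset, limit=page_size)
            if not page:
                return
            yield page
            offset += page_size

    def front_ws(services, query):
        """Aggregate lazily: only one page per service is held in memory at
        once, and each page can be streamed straight to the client component."""
        for service in services:
            for page in fetch_pages(service, query):
                yield page
    ```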

    Read the article

  • Customize an iPhone app for different clients

    - by gnibbler
    I have an existing app that needs to be compiled for different clients. Each client requires its own icon and splash screen. I would also like to be able to conditionally include various features depending on whether a particular client requires them. I have tried setting up different targets for each client, but I'm not having much luck so far: resources with the same name but different paths keep getting mixed up. Ideally I would like to build the app for a new client by duplicating the target of a similar client and then making the minimum number of changes. What is the best way to set this up?

    Read the article

  • Can you use POST to run a query in Solr (/select)?

    - by RyanFetz
    I have queries that I am running against our Solr index that sometimes have very long query parameters, and I get errors when I run them, which I assume are due to the length limit on GET query strings. Here is the method I use to query (JSON); this is just to show that I am using the HTTP extensions (the client I use is a thin wrapper for HttpClient), not an end-to-end solution. 90% of the queries run fine; it is only when the params are large that I get the 500 error from Solr. I have read somewhere that you can use POSTs when doing the select command, but I have not found examples of how to do it. Any help would be fantastic!

    public string GetJson(HttpQueryString qs)
    {
        using (var client = new DAC.US.Web.XmlHttpServiceClient(this.Uri))
        {
            client.Client.DefaultHeaders.Authorization = new Microsoft.Http.Headers.Credential("Basic", DAC.US.Encryption.Hash.WebServiceCredintials);
            qs.Add("wt", "json");
            if (!String.IsNullOrEmpty(this.Version)) qs.Add("version", this.Version);
            using (var response = client.Get(new Uri(@"select/", UriKind.Relative), qs))
            {
                return response.Content.ReadAsString();
            }
        }
    }
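
    Solr's /select handler does accept POST requests: send exactly the same parameters, but form-encoded in the request body (Content-Type: application/x-www-form-urlencoded) instead of in the URL, which sidesteps the GET length limit. A minimal sketch in Python; the host, core name, and query are placeholders.

    ```python
    import json
    import urllib.parse
    import urllib.request

    # The same parameters you would put in the GET query string...
    params = urllib.parse.urlencode({
        "q": "category:(1 OR 2 OR 3)",  # imagine something far longer here
        "wt": "json",
        "rows": 10,
    }).encode("ascii")

    # ...but carried in the POST body, so URL length limits don't apply.
    req = urllib.request.Request(
        "http://localhost:8983/solr/mycore/select",
        data=params,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"]["numFound"])
    ```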

    Read the article

  • Working with sockets in MFC

    - by fanq
    I'm trying to make an MFC application (client) that connects to a server on ("localhost", port 1234); the server replies to the client, and the client reads the server's response. The server receives the data from the client and sends the reply back on the socket it came in on, but I am unable to read that reply in the client. I am using a CAsyncSocket to connect to the server and send data, and a CAsyncSocket with overloaded OnAccept and OnReceive methods to read the reply from the server. Please tell me what I'm doing wrong.

    Read the article

  • Game network physics collision

    - by Jonas Byström
    How do you simulate two client-controlled vehicles colliding (sensibly) in a typical client/server setup for a network game? I did read this eminent blog post on how to do distributed network physics in general (without traditional client prediction), but this question is specifically about how to handle collisions of owned objects. Example: say client A is 20 ms ahead of the server and client B is 300 ms ahead of the server (counting both latency and maximum jitter). This means that when the two vehicles collide, each client will see the other as 320 ms behind, in the direction opposite to the other vehicle's velocity. Head-to-head on a Swedish highway, that means a difference of 16 meters/17.5 yards! What not to try: it is virtually impossible to extrapolate the positions, since I also have very complex vehicles with joints and bodies all over, which in turn have linear and angular positions, velocities and accelerations, not to mention state from user input.

    Read the article

  • Check for new mail in Hotmail using OpenPop.NET

    - by deadlock
    I was advised to use the OpenPop library to fetch my mail from Hotmail, but I couldn't find any better way to check for new mail than disconnecting and reconnecting again. And there I found a problem: Hotmail doesn't allow more than one POP3 login every 15 minutes. I want to know if it's possible to get new mail without disconnecting and reconnecting to their POP3 server. Here is my code:

    Timer checker = new Timer();
    checker.Interval = 1000;
    checker.Tick += new EventHandler(delegate {
        Pop3Client client = new Pop3Client();
        client.Connect("pop3.live.com", 995, true);
        client.Authenticate("[email protected]", "myPassword");
        label1.Text = client.GetMessageCount().ToString();
        client.Disconnect();
    });
    checker.Start();
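
    Part of the problem is POP3 itself: the protocol locks the maildrop for the duration of a session, so new mail generally only shows up on a fresh login, which is why checking means reconnecting. Given Hotmail's one-login-per-15-minutes rule, a practical pattern is to reconnect no more often than that and use UIDL to tell new messages from ones already seen. Here is a sketch with Python's standard poplib; the credentials are placeholders.

    ```python
    import poplib

    seen = set()  # UIDs observed in earlier sessions

    def check_new_mail(user="user@example.com", password="secret"):
        """One short POP3 session; returns UIDs new since the last check."""
        client = poplib.POP3_SSL("pop3.live.com", 995)
        try:
            client.user(user)
            client.pass_(password)
            # uidl() -> (response, [b'1 uid1', b'2 uid2', ...], octet count)
            uids = {line.split()[1] for line in client.uidl()[1]}
        finally:
            client.quit()
        new = uids - seen
        seen.update(uids)
        return new

    # Call this from a timer that fires at most once every 15 minutes.
    ```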

    Read the article

  • How can I do something like IList<T>.Contains(OtherObjectType)?

    - by Noctris
    Hi, I have the following classes: Client, ClientCacheMedia (which contains Client, Media and some other parameters, so it is the link between the media and the client) and Media, where Client contains an IList of ClientCacheMedia. What I would like to do is check whether this list contains a certain media, i.e. Client.ClientCacheMedia.Contains(MyMedia). Is there any way to let the IList accept a Media as the object to match? (I can easily override Equals on ClientCacheMedia to check whether the passed media is the one ClientCacheMedia.Media contains; it's just that the IList will not accept any other object type in its Contains method.)
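
    Rather than bending Contains (which tests whole-element equality) to accept another type, the usual answer is a predicate-based search over the link objects; in C#, LINQ's Any does exactly this, e.g. client.ClientCacheMedia.Any(ccm => ccm.Media == myMedia). The same idea is sketched below in Python, with class names mirrored from the question.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Media:
        title: str

    @dataclass
    class ClientCacheMedia:
        media: Media  # the link object ties a Media (plus params) to a Client

    @dataclass
    class Client:
        client_cache_media: list = field(default_factory=list)

        def has_media(self, media: Media) -> bool:
            # Membership by predicate over the links, not element equality.
            return any(ccm.media is media for ccm in self.client_cache_media)

    song = Media("song")
    client = Client([ClientCacheMedia(song)])
    print(client.has_media(song))            # True
    print(client.has_media(Media("other")))  # False
    ```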

    Read the article

  • IPsec tunnel to Android device not created even though there is an IKE SA

    - by Quentin Swain
    I'm trying to configure a VPN tunnel between an Android device running 4.1 and a Fedora 17 Linux box running strongSwan 5.0. The device reports that it is connected and strongSwan statusall returns that there is an IKE SA, but doesn't display a tunnel. I used the instructions for iOS in the wiki to generate certificates and configure strongSwan. Since Android uses a modified version of racoon this should work and since the connection is partly established I think I am on the right track. I don't see any errors about not being able to create the tunnel. This is the configuration for the strongSwan connection conn android2 keyexchange=ikev1 authby=xauthrsasig xauth=server left=96.244.142.28 leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=%any rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem ike=aes256-sha1-modp1024 auto=add This is the output of strongswan statusall Status of IKE charon daemon (strongSwan 5.0.0, Linux 3.3.4-5.fc17.x86_64, x86_64): uptime: 20 minutes, since Oct 31 10:27:31 2012 malloc: sbrk 270336, mmap 0, used 198144, free 72192 worker threads: 8 of 16 idle, 7/1/0/0 working, job queue: 0/0/0/0, scheduled: 7 loaded plugins: charon aes des sha1 sha2 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs8 pgp dnskey pem openssl fips-prf gmp xcbc cmac hmac attr kernel-netlink resolve socket-default stroke updown xauth-generic Virtual IP pools (size/online/offline): android-hybrid: 1/0/0 android2: 1/1/0 Listening IP addresses: 96.244.142.28 Connections: android-hybrid: %any...%any IKEv1 android-hybrid: local: [C=CH, O=strongSwan, CN=vpn.strongswan.org] uses public key authentication android-hybrid: cert: "C=CH, O=strongSwan, CN=vpn.strongswan.org" android-hybrid: remote: [%any] uses XAuth authentication: any android-hybrid: child: dynamic === dynamic TUNNEL android2: 96.244.142.28...%any IKEv1 android2: local: [C=CH, O=strongSwan, CN=vpn.strongswan.org] uses public key authentication android2: cert: "C=CH, O=strongSwan, CN=vpn.strongswan.org" android2: remote: [C=CH, O=strongSwan, CN=client] uses public key authentication android2: cert: "C=CH, O=strongSwan, CN=client" android2: remote: [%any] uses XAuth authentication: any android2: child: 0.0.0.0/0 === 10.0.0.0/24 TUNNEL Security Associations (1 up, 0 connecting): android2[3]: ESTABLISHED 10 seconds ago, 96.244.142.28[C=CH, O=strongSwan, CN=vpn.strongswan.org]...208.54.35.241[C=CH, O=strongSwan, CN=client] android2[3]: Remote XAuth identity: android android2[3]: IKEv1 SPIs: 4151e371ad46b20d_i 59a56390d74792d2_r*, public key reauthentication in 56 minutes android2[3]: IKE proposal: AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024 The output of ip -s xfrm policy src ::/0 dst ::/0 uid 0 socket in action allow index 3851 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src ::/0 dst ::/0 uid 0 socket out action allow index 3844 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src ::/0 dst ::/0 uid 0 socket in action allow index 3835 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 
0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src ::/0 dst ::/0 uid 0 socket out action allow index 3828 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use - src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket in action allow index 3819 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:39 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket out action allow index 3812 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:22 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket in action allow index 3803 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:20 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 socket out action allow index 3796 priority 0 ptype main share any flag (0x00000000) lifetime config: limit: soft 0(bytes), hard 0(bytes) limit: soft 0(packets), hard 0(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:29:08 use 2012-10-31 13:29:20 So a xfrm policy isn't being created for the connection, even though there is an SA between device and strongswan. 
Executing ip -s xfrm policy on the android device results in the following output: src 0.0.0.0/0 dst 10.0.0.2/32 uid 0 dir in action allow index 40 priority 2147483648 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:08 use - tmpl src 96.244.142.28 dst 25.239.33.30 proto esp spi 0x00000000(0) reqid 0(0x00000000) mode tunnel level required share any enc-mask 00000000 auth-mask 00000000 comp-mask 00000000 src 10.0.0.2/32 dst 0.0.0.0/0 uid 0 dir out action allow index 33 priority 2147483648 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:08 use - tmpl src 25.239.33.30 dst 96.244.142.28 proto esp spi 0x00000000(0) reqid 0(0x00000000) mode tunnel level required share any enc-mask 00000000 auth-mask 00000000 comp-mask 00000000 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 4 action allow index 28 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:08 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 3 action allow index 19 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:08 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 4 action allow index 12 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:06 src 0.0.0.0/0 dst 0.0.0.0/0 uid 0 dir 3 action allow index 3 priority 0 share any flag (0x00000000) lifetime config: limit: soft (INF)(bytes), hard (INF)(bytes) limit: soft (INF)(packets), hard (INF)(packets) expire add: soft 0(sec), hard 0(sec) expire use: soft 0(sec), hard 0(sec) lifetime current: 0(bytes), 0(packets) add 2012-10-31 13:42:04 use 2012-10-31 13:42:07 Logs from charon: 00[DMN] Starting IKE charon daemon (strongSwan 5.0.0, Linux 3.3.4-5.fc17.x86_64, x86_64) 00[KNL] listening on interfaces: 00[KNL] em1 00[KNL] 96.244.142.28 00[KNL] fe80::224:e8ff:fed2:18b2 00[CFG] loading ca certificates from '/etc/strongswan/ipsec.d/cacerts' 00[CFG] loaded ca certificate "C=CH, O=strongSwan, CN=strongSwan CA" from '/etc/strongswan/ipsec.d/cacerts/caCert.pem' 00[CFG] loading aa certificates from '/etc/strongswan/ipsec.d/aacerts' 00[CFG] loading ocsp signer certificates from '/etc/strongswan/ipsec.d/ocspcerts' 00[CFG] loading attribute certificates from '/etc/strongswan/ipsec.d/acerts' 00[CFG] loading crls from '/etc/strongswan/ipsec.d/crls' 00[CFG] loading secrets from '/etc/strongswan/ipsec.secrets' 00[CFG] loaded RSA private key from '/etc/strongswan/ipsec.d/private/clientKey.pem' 00[CFG] loaded IKE secret for %any 00[CFG] loaded EAP secret for 
android 00[CFG] loaded EAP secret for android 00[DMN] loaded plugins: charon aes des sha1 sha2 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs8 pgp dnskey pem openssl fips-prf gmp xcbc cmac hmac attr kernel-netlink resolve socket-default stroke updown xauth-generic 08[NET] waiting for data on sockets 16[LIB] created thread 16 [15338] 16[JOB] started worker thread 16 11[CFG] received stroke: add connection 'android-hybrid' 11[CFG] conn android-hybrid 11[CFG] left=%any 11[CFG] leftsubnet=(null) 11[CFG] leftsourceip=(null) 11[CFG] leftauth=pubkey 11[CFG] leftauth2=(null) 11[CFG] leftid=(null) 11[CFG] leftid2=(null) 11[CFG] leftrsakey=(null) 11[CFG] leftcert=serverCert.pem 11[CFG] leftcert2=(null) 11[CFG] leftca=(null) 11[CFG] leftca2=(null) 11[CFG] leftgroups=(null) 11[CFG] leftupdown=ipsec _updown iptables 11[CFG] right=%any 11[CFG] rightsubnet=(null) 11[CFG] rightsourceip=96.244.142.3 11[CFG] rightauth=xauth 11[CFG] rightauth2=(null) 11[CFG] rightid=%any 11[CFG] rightid2=(null) 11[CFG] rightrsakey=(null) 11[CFG] rightcert=(null) 11[CFG] rightcert2=(null) 11[CFG] rightca=(null) 11[CFG] rightca2=(null) 11[CFG] rightgroups=(null) 11[CFG] rightupdown=(null) 11[CFG] eap_identity=(null) 11[CFG] aaa_identity=(null) 11[CFG] xauth_identity=(null) 11[CFG] ike=aes256-sha1-modp1024 11[CFG] esp=aes128-sha1-modp2048,3des-sha1-modp1536 11[CFG] dpddelay=30 11[CFG] dpdtimeout=150 11[CFG] dpdaction=0 11[CFG] closeaction=0 11[CFG] mediation=no 11[CFG] mediated_by=(null) 11[CFG] me_peerid=(null) 11[CFG] keyexchange=ikev1 11[KNL] getting interface name for %any 11[KNL] %any is not a local address 11[KNL] getting interface name for %any 11[KNL] %any is not a local address 11[CFG] left nor right host is our side, assuming left=local 11[CFG] loaded certificate "C=CH, O=strongSwan, CN=vpn.strongswan.org" from 'serverCert.pem' 11[CFG] id '%any' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=vpn.strongswan.org' 11[CFG] added configuration 'android-hybrid' 11[CFG] adding virtual IP address pool 'android-hybrid': 96.244.142.3/32 13[CFG] received stroke: add connection 'android2' 13[CFG] conn android2 13[CFG] left=96.244.142.28 13[CFG] leftsubnet=0.0.0.0/0 13[CFG] leftsourceip=(null) 13[CFG] leftauth=pubkey 13[CFG] leftauth2=(null) 13[CFG] leftid=(null) 13[CFG] leftid2=(null) 13[CFG] leftrsakey=(null) 13[CFG] leftcert=serverCert.pem 13[CFG] leftcert2=(null) 13[CFG] leftca=(null) 13[CFG] leftca2=(null) 13[CFG] leftgroups=(null) 13[CFG] leftupdown=ipsec _updown iptables 13[CFG] right=%any 13[CFG] rightsubnet=10.0.0.0/24 13[CFG] rightsourceip=10.0.0.2 13[CFG] rightauth=pubkey 13[CFG] rightauth2=xauth 13[CFG] rightid=(null) 13[CFG] rightid2=(null) 13[CFG] rightrsakey=(null) 13[CFG] rightcert=clientCert.pem 13[CFG] rightcert2=(null) 13[CFG] rightca=(null) 13[CFG] rightca2=(null) 13[CFG] rightgroups=(null) 13[CFG] rightupdown=(null) 13[CFG] eap_identity=(null) 13[CFG] aaa_identity=(null) 13[CFG] xauth_identity=(null) 13[CFG] ike=aes256-sha1-modp1024 13[CFG] esp=aes128-sha1-modp2048,3des-sha1-modp1536 13[CFG] dpddelay=30 13[CFG] dpdtimeout=150 13[CFG] dpdaction=0 13[CFG] closeaction=0 13[CFG] mediation=no 13[CFG] mediated_by=(null) 13[CFG] me_peerid=(null) 13[CFG] keyexchange=ikev0 13[KNL] getting interface name for %any 13[KNL] %any is not a local address 13[KNL] getting interface name for 96.244.142.28 13[KNL] 96.244.142.28 is on interface em1 13[CFG] loaded certificate "C=CH, O=strongSwan, CN=vpn.strongswan.org" from 'serverCert.pem' 13[CFG] id '96.244.142.28' not confirmed by 
certificate, defaulting to 'C=CH, O=strongSwan, CN=vpn.strongswan.org' 13[CFG] loaded certificate "C=CH, O=strongSwan, CN=client" from 'clientCert.pem' 13[CFG] id '%any' not confirmed by certificate, defaulting to 'C=CH, O=strongSwan, CN=client' 13[CFG] added configuration 'android2' 13[CFG] adding virtual IP address pool 'android2': 10.0.0.2/32 08[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500] 15[CFG] looking for an ike config for 96.244.142.28...208.54.35.241 15[CFG] candidate: %any...%any, prio 2 15[CFG] candidate: 96.244.142.28...%any, prio 5 15[CFG] found matching ike config: 96.244.142.28...%any with prio 5 01[JOB] next event in 29s 999ms, waiting 15[IKE] received NAT-T (RFC 3947) vendor ID 15[IKE] received draft-ietf-ipsec-nat-t-ike-02 vendor ID 15[IKE] received draft-ietf-ipsec-nat-t-ike-02\n vendor ID 15[IKE] received draft-ietf-ipsec-nat-t-ike-00 vendor ID 15[IKE] received XAuth vendor ID 15[IKE] received Cisco Unity vendor ID 15[IKE] received DPD vendor ID 15[IKE] 208.54.35.241 is initiating a Main Mode IKE_SA 15[IKE] IKE_SA (unnamed)[1] state change: CREATED => CONNECTING 15[CFG] selecting proposal: 15[CFG] proposal matches 15[CFG] received proposals: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_256/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_128/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:3DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:3DES_CBC/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024, IKE:DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:DES_CBC/HMAC_MD5_96/PRF_HMAC_MD5/MODP_1024 15[CFG] configured proposals: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_128/AES_CBC_192/AES_CBC_256/3DES_CBC/CAMELLIA_CBC_128/CAMELLIA_CBC_192/CAMELLIA_CBC_256/HMAC_MD5_96/HMAC_SHA1_96/HMAC_SHA2_256_128/HMAC_SHA2_384_192/HMAC_SHA2_512_256/AES_XCBC_96/AES_CMAC_96/PRF_HMAC_MD5/PRF_HMAC_SHA1/PRF_HMAC_SHA2_256/PRF_HMAC_SHA2_384/PRF_HMAC_SHA2_512/PRF_AES128_XCBC/PRF_AES128_CMAC/MODP_2048/MODP_2048_224/MODP_2048_256/MODP_1536/MODP_4096/MODP_8192/MODP_1024/MODP_1024_160 15[CFG] selected proposal: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024 15[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235] 04[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235] 15[MGR] checkin IKE_SA (unnamed)[1] 15[MGR] check-in of IKE_SA successful. 08[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500] 08[NET] waiting for data on sockets 07[MGR] checkout IKE_SA by message 07[MGR] IKE_SA (unnamed)[1] successfully checked out 07[NET] received packet: from 208.54.35.241[32235] to 96.244.142.28[500] 07[LIB] size of DH secret exponent: 1023 bits 07[IKE] remote host is behind NAT 07[IKE] sending cert request for "C=CH, O=strongSwan, CN=strongSwan CA" 07[ENC] generating NAT_D_V1 payload finished 07[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235] 07[MGR] checkin IKE_SA (unnamed)[1] 07[MGR] check-in of IKE_SA successful. 
04[NET] sending packet: from 96.244.142.28[500] to 208.54.35.241[32235] 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 10[IKE] ignoring certificate request without data 10[IKE] received end entity cert "C=CH, O=strongSwan, CN=client" 10[CFG] looking for XAuthInitRSA peer configs matching 96.244.142.28...208.54.35.241[C=CH, O=strongSwan, CN=client] 10[CFG] candidate "android-hybrid", match: 1/1/2/2 (me/other/ike/version) 10[CFG] candidate "android2", match: 1/20/5/1 (me/other/ike/version) 10[CFG] selected peer config "android2" 10[CFG] certificate "C=CH, O=strongSwan, CN=client" key: 2048 bit RSA 10[CFG] using trusted ca certificate "C=CH, O=strongSwan, CN=strongSwan CA" 10[CFG] checking certificate status of "C=CH, O=strongSwan, CN=client" 10[CFG] ocsp check skipped, no ocsp found 10[CFG] certificate status is not available 10[CFG] certificate "C=CH, O=strongSwan, CN=strongSwan CA" key: 2048 bit RSA 10[CFG] reached self-signed root ca with a path length of 0 10[CFG] using trusted certificate "C=CH, O=strongSwan, CN=client" 10[IKE] authentication of 'C=CH, O=strongSwan, CN=client' with RSA successful 10[ENC] added payload of type ID_V1 to message 10[ENC] added payload of type SIGNATURE_V1 to message 10[IKE] authentication of 'C=CH, O=strongSwan, CN=vpn.strongswan.org' (myself) successful 10[IKE] queueing XAUTH task 10[IKE] sending end entity cert "C=CH, O=strongSwan, CN=vpn.strongswan.org" 10[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 10[IKE] activating new tasks 10[IKE] activating XAUTH task 10[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 01[JOB] next event in 3s 999ms, waiting 10[MGR] checkin IKE_SA android2[1] 10[MGR] check-in of IKE_SA successful. 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 08[NET] waiting for data on sockets 12[MGR] checkout IKE_SA by message 12[MGR] IKE_SA android2[1] successfully checked out 12[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 12[MGR] checkin IKE_SA android2[1] 12[MGR] check-in of IKE_SA successful. 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 16[MGR] checkout IKE_SA by message 16[MGR] IKE_SA android2[1] successfully checked out 16[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 08[NET] waiting for data on sockets 16[IKE] XAuth authentication of 'android' successful 16[IKE] reinitiating already active tasks 16[IKE] XAUTH task 16[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 16[MGR] checkin IKE_SA android2[1] 01[JOB] next event in 3s 907ms, waiting 16[MGR] check-in of IKE_SA successful. 
08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 09[MGR] checkout IKE_SA by message 09[MGR] IKE_SA android2[1] successfully checked out 09[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] .8rS 09[IKE] IKE_SA android2[1] established between 96.244.142.28[C=CH, O=strongSwan, CN=vpn.strongswan.org]...208.54.35.241[C=CH, O=strongSwan, CN=client] 09[IKE] IKE_SA android2[1] state change: CONNECTING => ESTABLISHED 09[IKE] scheduling reauthentication in 3409s 09[IKE] maximum IKE_SA lifetime 3589s 09[IKE] activating new tasks 09[IKE] nothing to initiate 09[MGR] checkin IKE_SA android2[1] 09[MGR] check-in of IKE_SA successful. 09[MGR] checkout IKE_SA 09[MGR] IKE_SA android2[1] successfully checked out 09[MGR] checkin IKE_SA android2[1] 09[MGR] check-in of IKE_SA successful. 01[JOB] next event in 3s 854ms, waiting 08[NET] waiting for data on sockets 08[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 14[MGR] checkout IKE_SA by message 14[MGR] IKE_SA android2[1] successfully checked out 14[NET] received packet: from 208.54.35.241[35595] to 96.244.142.28[4500] 14[IKE] processing INTERNAL_IP4_ADDRESS attribute 14[IKE] processing INTERNAL_IP4_NETMASK attribute 14[IKE] processing INTERNAL_IP4_DNS attribute 14[IKE] processing INTERNAL_IP4_NBNS attribute 14[IKE] processing UNITY_BANNER attribute 14[IKE] processing UNITY_DEF_DOMAIN attribute 14[IKE] processing UNITY_SPLITDNS_NAME attribute 14[IKE] processing UNITY_SPLIT_INCLUDE attribute 14[IKE] processing UNITY_LOCAL_LAN attribute 14[IKE] processing APPLICATION_VERSION attribute 14[IKE] peer requested virtual IP %any 14[CFG] assigning new lease to 'android' 14[IKE] assigning virtual IP 10.0.0.2 to peer 'android' 14[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 14[MGR] checkin IKE_SA android2[1] 14[MGR] check-in of IKE_SA successful. 04[NET] sending packet: from 96.244.142.28[4500] to 208.54.35.241[35595] 08[NET] waiting for data on sockets 01[JOB] got event, queuing job for execution 01[JOB] next event in 91ms, waiting 13[MGR] checkout IKE_SA 13[MGR] IKE_SA android2[1] successfully checked out 13[MGR] checkin IKE_SA android2[1] 13[MGR] check-in of IKE_SA successful. 01[JOB] got event, queuing job for execution 01[JOB] next event in 24s 136ms, waiting 15[MGR] checkout IKE_SA 15[MGR] IKE_SA android2[1] successfully checked out 15[MGR] checkin IKE_SA android2[1] 15[MGR] check-in of IKE_SA successful.

    Read the article

  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed-length records using Format Builder. If it's 10 pm and you're feeling beat you might want to leave this until tomorrow. But if it's 10 pm and you need to get a Format Builder Complex template done, read on…

    The goal is to process individual orders from a file using the 11g File Adapter and Format Builder.

    Sample Data
    ===========
    001Square Widget            0245.98
    102Triagular Widget         1120.00
    403Circular Widget           0099.45
    ORD8898302/01/2011
    301Hexagon Widget         1150.98
    ORD6735502/01/2011

    The records are fixed-length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3-digit number, followed by a single Summary Record which starts with the constant ORD.

    How can this file be processed so that the first poll returns the first order?

    001Square Widget            0245.98
    102Triagular Widget         1120.00
    403Circular Widget           0099.45
    ORD8898302/01/2011

    And the second poll returns the second order?

    301Hexagon Widget           1150.98
    ORD6735502/01/2011

    Note: if you need more than one order per poll, that's also possible; see the "Multiple Messages" field in the "File Adapter Step 6 of 9" snapshot further down.

    To follow along with this example you will need:
    - Studio Edition Version 11.1.1.4.0, with the
    - SOA Extension for JDeveloper 11.1.1.4.0 installed
    Both can be downloaded from here: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html. You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.

    Start with a SOA Composite containing a File Adapter
    The Format Builder is part of the File Adapter, so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps:
    - Start JDeveloper.
    - From the Main Menu choose File->New.
    - In the New Gallery window that opens, expand the "General" category and select the Applications node. Then choose SOA Application from the Items section on the right. Finally, press the OK button.
    - In Step 1 of the "Create SOA Application" wizard that appears, enter an Application Name and a Directory of your choice, then press the Next button.
    - In Step 2 of the "Create SOA Application" wizard, press the Next button, leaving all entries as defaulted.
    - In Step 3 of the "Create SOA Application" wizard, enter a composite name of your choice and press the Finish button.
    These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create.

    Drag and drop the File Adapter icon from the Component Palette onto either the LEFT side of the diagram under "Exposed Services" or the right side under "External References" (see the green circle in the image below). Placing the adapter on the left side indicates the file being processed is inbound to the composite; if the adapter is placed on the right side, the data is outbound to a file.

    Note that the same Format Builder definition can be used in both directions.
    For example, we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process, and then use the same Format Builder definition with a File Adapter on the right side of the composite to write the data back out in the same fixed data format.

    When the File Adapter is dropped on the Composite, the File Adapter Wizard appears. Skip past the first page (Step 1 of 9) by pressing the Next button. In Step 2, enter a service name of your choice as shown below, then press Next. When the Native Format Builder appears, skip the welcome page by pressing Next. Also press the Next button to accept the settings on Step 3 of 9. On Step 4, select Read File and press the Next button as shown below. On Step 5, enter a directory that will contain a file with the input data, then press the Next button as shown below.

    In Step 6, enter *.txt or another file pattern to select input files from the input directory mentioned in Step 5. ALSO check the "Files contain Multiple Messages" checkbox and set the "Publish Messages in Batches of" field to 1. The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter. In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.

    Skip Step 7 by pressing the Next button. In Step 8, press the Gear Icon on the right side to load the Native Format Builder.

    The Native Format Builder appears. Before diving into the format, here is an overview of the process.

    Approach - Bottom up
    Assuming an Order is a grouping of item records and a summary record…
    - Define a separate Complex Type for each Record Type found in the group (one for itemRecord and one for summaryRecord).
    - Define a Complex Type to contain the Group of Record types defined above (LogicalOrderRecord).
    - Define a top-level element to represent an order (order). The order element will be of type LogicalOrderRecord.

    Defining the Format
    In Step 1, select "Create new" and "Complex Type" and "Next". In Step 2, browse to and select a file containing the test data shown at the start of this article; a link is provided at the end of this article to download a file containing the test data. Press the Next button.

    In Step 3, complex types must be defined for each type of input record. Select the Root-Element and click on the Add Complex Type icon. This creates a new empty complex type definition, shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the <new_complex_type>. Format Builder introspects the data and provides a grid to define additional fields.

    Change the "Complex Type Name" to "itemRecord". Then click on the ruler to indicate the position of the fixed columns; drag the red triangle icons to the exact columns if necessary, and double-click an existing red triangle to remove an unwanted entry. In the case below, fields are defined in columns 0-3, 4-28, and 29-eol. When the field definitions are correct, press the "Generate Fields" button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost. When all the fields are correctly defined, press OK to save the complex type.

    Next, the process is repeated to define a Complex Type for the summaryRecord.
    Select the Root-Element in the schema tree and press the New Complex Type icon, then highlight and drag the Summary Record from the sample data onto the <new_complex_type>. Change the complex type name to "summaryRecord", mark the fixed fields for Order Number and Order Date, press the Generate Fields button, and rename C1 and C2 to itemNum and orderDate respectively.

    The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the New Complex Type icon. Select the "<new_complex_type>" entry and click the pencil icon. On the Complex Type Details page, change the name and type of each input field: change line 1 to be named item and set its Type to "itemRecord"; change line 2 to be named summary and set its Type to "summaryRecord".

    We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page, change the "Max Occurs" entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord. Since each item record has "." in column 32, we can use this fact to differentiate an item record from a summary record: change the "Look Ahead" field to the value 32 and enter a period in the "Look For" field. Press the OK button to save the entry.

    Finally, it's time to create a top-level element to represent an order. Select the "Root-Element" in the schema tree and press the New Element icon. Click on the <new_element> and press the pencil icon. Set the Element Name to "order" and change the Data Type to "logicalOrderRecord". Press the OK button to save the element definition.

    The final definition should match the screenshot below. Press the Next button to view the definition source, then press the Test button to test the definition and the Green Triangle icon to run the test.

    And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition: the processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords. At end of file, the "summary" portion of the logicalOrderRecord remained unprocessed but mandatory.

    The root cause of this error is the loss of our "lookAhead" definition used to identify itemRecords. This appears to be a bug in the Native Format Builder 11.1.1.4.0. Luckily, a simple workaround exists. Press the Cancel button and return to the "Step 4 of 4" window. Manually add nxsd:lookAhead="32" nxsd:lookFor="." attributes after the maxOccurs attribute of the item element, as shown in the highlighted text below.

    When the lookAhead and lookFor attributes have been added, press the Test button and, on the Test page, press the Green Triangle. The test is now successful: the first order in the file is returned by the File Adapter. Below is a complete listing of the Result XML from the right column of the screen above.

    Try running it
    The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data: Download Sample Input Data. This is a better approach than cutting and pasting the input data at the top of the article; since the data is fixed length, it's very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line.
    The download file is correctly formatted. The final schema definition can be downloaded at the following link: Download Completed Schema Definition.
    - Save the inputData.txt file to a known location, like the xsd folder in your project.
    - Save the inputData_6.xsd file to the xsd folder in your project.
    - At Step 1 in the Native Format Builder wizard (as shown above), check the "Edit existing" radio button, then browse to and select the inputData_6.xsd file.
    - At Step 2 of the Format Builder configuration wizard (as shown above), supply the path and filename for the inputData.txt file.
    - You can then proceed to the test page and run a test.
    - Remember the wizard bug will drop the lookAhead and lookFor attributes; you will need to manually add nxsd:lookAhead="32" nxsd:lookFor="." after the maxOccurs attribute of the item element in the LogicalOrderRecord Complex Type (as shown above).

    Good luck with your Format project!

    Read the article

  • EM12c Release 4: New Compliance features including DB STIG Standard

    - by DaveWolf
    Enterprise Manager's compliance framework is a powerful and robust feature that provides users the ability to continuously validate their target configurations against a specified standard. Enterprise Manager's compliance library is filled with a wide variety of standards based on Oracle's recommendations, best practices and security guidelines. These standards can be easily associated to a target to generate a report showing its degree of conformance to that standard. (To get an overview of Database compliance management in Enterprise Manager, see this screenwatch.)

    Starting with release 12.1.0.4 of Enterprise Manager, the compliance library will contain a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, "The STIGs contain technical guidance to 'lock down' information systems/software that might otherwise be vulnerable to a malicious computer attack." In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards; however, many non-US government entities and commercial companies base their standards directly or partially on these STIGs. You can find more information about the Oracle Database and other STIGs on the DISA website.

    The Oracle Database 11g STIG consists of two categories of checks, installation and instance. Installation checks focus primarily on the security of the Oracle Home, while the instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories.

    The rule names contain a rule ID (DG0020, for example) which maps directly to the check name in the STIG checklist, along with a helpful brief description. The actual description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation.
    In order to use this standard, both the OMS and agent must be at version 12.1.0.4, as it takes advantage of several features new in this release, including:
    - Agent-Side Compliance Rules
    - Manual Compliance Rules
    - Violation Suppression
    - Additional BI Publisher Compliance Reports

    Agent-Side Compliance Rules
    Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created custom compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a Repository compliance rule. This process, although powerful, could make it confusing to correctly model the SQL in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination and that's it; Enterprise Manager will do the rest for you. This tighter integration also means their lifecycle is managed together: when you associate an agent-side compliance standard to a target, the required Configuration Extensions are deployed automatically for you. The opposite is also true: when you unassociate the compliance standard, the Configuration Extensions are undeployed.
    The Oracle Database STIG compliance standard is implemented as an agent-side standard, which is why you simply need to associate the standard to your database targets without previously deploying the associated Configuration Extensions. You can learn more about using agent-side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN.

    Manual Compliance Rules
    There are many checks in the Oracle Database STIG, as well as in other common standards, which simply cannot be automated. These can be as simple as "Ensure the datacenter entrance is secured." or as complex as Oracle Database STIG Rule DG0186, "The database should not be directly accessible from public or unauthorized networks". These checks require a human to perform them and attest to their successful completion. Enterprise Manager now supports these types of checks in Manual rules. When first associated to a target, each manual rule generates a single violation. These violations must be manually cleared by a user, who is in essence attesting to the check's successful completion. The user is able to permanently clear the violation or give a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance, wherein the user will have to re-perform the check. The optional reason field gives the user an opportunity to provide details of the check results.
    Violation Suppression
    There are situations that require permanently or temporarily suppressing a legitimate violation or finding; these include approved exceptions and grace periods. Enterprise Manager now supports the ability to temporarily or permanently suppress a violation. Unlike clearing a manual rule violation, suppression simply removes the violation from the compliance results UI, and in turn its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue; if the issue is not addressed within the specified period, the violation will reappear in the results automatically. Again, the user may enter a reason for the suppression, which is permanently saved with the event along with the suppressing user ID.

    Additional BI Publisher compliance reports
    As I am sure you have learned by now, BI Publisher now ships with, and is integrated with, Enterprise Manager 12.1.0.4. This means users can take full advantage of the powerful reporting engine by using the Oracle-provided reports or building their own. There are many new compliance-related reports available in 12.1.0.4 covering all aspects, including association status and the library, as well as summary and detailed results reports.
10 New Compliance Reports (screenshot)

Compliance Summary Report example showing STIG results (screenshot)

Conclusion

Together with the Oracle Database 11g STIG compliance standard, these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well-known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

Additional EM12c Compliance Management Information

Compliance Management - Overview (Presentation)
Compliance Management - Custom Compliance on Default Data (How To)
Compliance Management - Custom Compliance using SQL Configuration Extension (How To)
Compliance Management - Custom Compliance using Command Configuration Extension (How To)

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February (but due to NDA restrictions I could not share them with the developer community at large), and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high-profile start-ups (i.e. Twitter) had publicly announced their intent to use Rails.

Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third-party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third-party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If it’s not MVC, it’s crap”. Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline.

So over the past few years some horror stories have begun to bubble to the surface about software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
These large-scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable, and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms.

The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result, many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive (i.e. PHP or Ruby).

At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team was promoting SharePoint as a platform for building internal enterprise web applications. SharePoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, SharePoint leverages a rich third-party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high-level plan from Microsoft to ensure consistency and continuity across the different product lines.

The introduction of ASP.NET MVC also made some of the third-party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third-party solutions, the demand remained strong and the third-party web control market continued to prosper despite the availability of MVC.

While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based, client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC web services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios.

So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack”, coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back, will eventually stick.

Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The “One ASP.NET” strategy will promote the use of all frameworks – WebForms, MVC, and Web API, even within the same web project.
Basically the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again.

And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
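To make the convention-based model the post describes concrete, here is a minimal sketch of a Web API controller. This is illustrative only; the ProductsController and Product type are hypothetical examples of my own, not code from the post. Web API maps HTTP verbs to methods by naming convention, so GET api/products reaches GetAllProducts and GET api/products/5 reaches GetProduct.

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    // In-memory sample data; a real service would query a repository
    private static readonly List<Product> products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" },
        new Product { Id = 2, Name = "Gadget" }
    };

    // GET api/products: returns the collection, serialized as JSON by default
    public IEnumerable<Product> GetAllProducts()
    {
        return products;
    }

    // GET api/products/{id}: returns one item or a native HTTP 404
    public Product GetProduct(int id)
    {
        var product = products.FirstOrDefault(p => p.Id == id);
        if (product == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return product;
    }
}

Note how the controller speaks in plain HTTP terms (verbs, status codes) rather than through layers of abstraction, which is the "embrace native HTTP standards" point made above.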

    Read the article

  • The Microsoft Ajax Library and Visual Studio Beta 2

    - by Stephen Walther
Visual Studio 2010 Beta 2 was released this week, and one of the first things that I hope you notice is that it no longer contains the latest version of ASP.NET AJAX. What happened? Where did AJAX go? Just like Sting and The Police, just like Phil Collins and Genesis, just like Greg Page and the Wiggles, AJAX has gone out of band! We are starting a solo career.

A Name Change

First things first. In previous releases, our Ajax framework was named ASP.NET AJAX. We have now changed the name of the framework to the Microsoft Ajax Library. There are two reasons behind this name change. First, the members of the Ajax team got tired of explaining to everyone that our Ajax framework is not tied to the server-side ASP.NET framework. You can use the Microsoft Ajax Library with ASP.NET Web Forms, ASP.NET MVC, PHP, Ruby on Rails, and even pure HTML applications. Our framework can be used as a client-only framework, and having the word ASP.NET in our name was confusing people. Second, it was time to start spelling the word Ajax like everyone else. Notice that the name is the Microsoft Ajax Library and not the Microsoft AJAX Library. Originally, Microsoft used uppercase AJAX because AJAX was originally an acronym for Asynchronous JavaScript and XML. And, according to Strunk and Wagnell, acronyms should be all uppercase. However, Ajax is one of those words that have migrated from acronym status to “just a word” status. So whenever you hear one of your co-workers talk about ASP.NET AJAX, gently correct your co-worker and say “It is now called the Microsoft Ajax Library.”

Why OOB?

But why move out-of-band (OOB)? The short answer is that we have had approximately 6 preview releases of the Microsoft Ajax Library over the last year. That’s a lot. We pride ourselves on being agile. Client-side technology evolves quickly. We want to be able to get a preview version of the Microsoft Ajax Library out to our customers, get feedback, and make changes to the library quickly. Shipping the Microsoft Ajax Library out-of-band keeps us agile and enables us to continue to ship new versions of the library even after ASP.NET 4 ships.

Showing Love for JavaScript Developers

One area in which we have received a lot of feedback is around making the Microsoft Ajax Library easier to use for developers who are comfortable with JavaScript. We also wanted to make it easy for jQuery developers to take advantage of the innovative features of the Microsoft Ajax Library. To achieve these goals, we’ve added the following features to the Microsoft Ajax Library (these features are included in the latest preview release that you can download right now):

A simplified imperative syntax – We wanted to make it brain-dead simple to create client-side Ajax controls when writing JavaScript.
A client script loader – We wanted the Microsoft Ajax Library to load all of the scripts required by a component or control automatically.
jQuery integration – We love the jQuery selector syntax. We wanted to make it easy for jQuery developers to use the Microsoft Ajax Library without changing their programming style.
If you are interested in learning about these new features of the Microsoft Ajax Library, I recommend that you read the following blog post by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2009/10/15/announcing-microsoft-ajax-library-preview-6-and-the-microsoft-ajax-minifier.aspx

Downloading the Latest Version of the Microsoft Ajax Library

Currently, the best place to download the latest version of the Microsoft Ajax Library is directly from the ASP.NET CodePlex project: http://aspnet.codeplex.com/ As I write this, the current version is Preview 6. The next version is coming out at the PDC.

Summary

I’m really excited about the future of the Microsoft Ajax Library. Moving outside of the ASP.NET framework provides us the flexibility to remain agile and continue to innovate aggressively. The latest preview release of the Microsoft Ajax Library includes several major new features, including a client script loader, jQuery integration, and a simplified client control creation syntax.

    Read the article

  • Organizational characteristics that impact the selection of Development Methodology concepts applied to a project

Based on my experience, no one really follows a specific methodology exactly as it is formally designed. In fact, the key concepts of a few methodologies are usually combined to form a hybrid methodology for each project, based on the current organizational makeup and the project needs/requirements to be accomplished.

Organizational characteristics that impact the selection of methodology concepts applied to a project

Prior subject knowledge pertaining to a project can be critical when deciding what methodology or combination of methodologies to apply. For example, if a project is very straightforward and the development staff has experience developing similar applications, then the waterfall method could possibly be the best choice, because little to no research is needed in order to complete the project tasks and there is very little need for changes to occur. On the other hand, if the development staff has limited subject knowledge, or the requirements/specifications of the project could change as the project progresses, then the use of spiral, iterative, incremental, agile, or any combination would be preferred.

The methodologies previously used by an organization typically do not change much from project to project unless the needs of a project dictate differently. For example, if the waterfall method is the preferred development methodology, then most projects will be developed with the waterfall method.

The time allotted to a project each day can also impact the selection of a development methodology. For example, if the staff can only devote a few hours a day to a project, then the incremental methodology might be ideal, because modules can be added to the final project as they are developed. On the other hand, if daily time allocation is not an issue, then a multitude of methodologies could work well for a project.

Project characteristics that impact the selection of methodology concepts applied to a project

The type of project being developed can often dictate the type of methodology used for the project. Based on my experience, projects that tend to have a lot of user interaction follow a more iterative, incremental, or agile approach, typically using a prototype that develops into the final project. These methodologies rely on back-and-forth communication between users, clients, and developers to allow requirements to change and functionality to be enhanced. Conversely, limited-interaction applications or automated services can still sometimes get away with using the waterfall or transactional approach.

The timeline of a project can also force an organization to prefer a particular methodology over the rest. For instance, if the project must be completed within 24 hours, then there is very little time for discussions back and forth between clients, users, and the development team. In this scenario, the waterfall method would be perfect, because the only interaction with the client occurs prior to the development project to outline the system requirements, and the development team can quickly move through the software development stages in order to complete the project within the deadline. If the team had more time, then the other methodologies could also be considered, because there would be more time for clients and users to review the project and make changes as they see fit, and/or more time to review the project in order to enhance business performance and functionality.
Sometimes the client and/or user involvement can dictate the selection of methodologies applied to a project. One example of this is a client who is highly motivated to get a project completed and desires to play an active part in the development process; the agile development approach would work perfectly with this client because it allows for frequent interaction between clients, users, and the development team. The inverse of this situation is a client who just wants to provide the project requirements and only wants to get involved when the project is to be delivered. In this case the waterfall method would work well, because there is no room for changes and no back and forth between the users, clients, or the development team.

    Read the article

  • How to Create SharePoint List and Insert List Item programmatically from a Windows Forms Application.

    - by Michael M. Bangoy
In this post I’m going to demonstrate how to create a SharePoint list and insert items into that list from a Windows Forms application.

1. Open Visual Studio and create a new project. On the project template, select Windows Forms Application under C#.

2. In order to communicate with SharePoint from a Windows Forms application, we need to add the two SharePoint Client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.

3. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. (Your solution should look like the one below.)

4. Open Form1 in design view and, from the Toolbox, add a button to the form surface. Your form should look like the one below.

5. Double-click the button to open the code view. Add a using statement to reference the SharePoint Client Library, then create the method that creates the list. Your code should look like the code below.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Security;
using System.Windows.Forms;
using SP = Microsoft.SharePoint.Client;

namespace ClientObjectModel
{
    public partial class Form1 : Form
    {
        // url of the SharePoint site
        const string _context = "urlofthesharepointsite";

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void cmdcreate_Click(object sender, EventArgs e)
        {
            try
            {
                // declare the ClientContext object
                SP.ClientContext _clientcontext = new SP.ClientContext(_context);
                SP.Web _site = _clientcontext.Web;

                // declare a ListCreationInformation
                SP.ListCreationInformation _listcreationinfo = new SP.ListCreationInformation();

                // set the Title and the Template of the list to be created
                _listcreationinfo.Title = "NewListFromCOM";
                _listcreationinfo.TemplateType = (int)SP.ListTemplateType.GenericList;

                // call the Add method with the ListCreationInformation
                SP.List _list = _site.Lists.Add(_listcreationinfo);

                // add a Description field to the list
                SP.Field _Description = _list.Fields.AddFieldAsXml(@"
                    <Field Type='Text'
                        DisplayName='Description'>
                    </Field>", true, SP.AddFieldOptions.AddToDefaultContentType);

                // declare the list item creation object for creating a list item
                SP.ListItemCreationInformation _itemcreationinfo = new SP.ListItemCreationInformation();

                // call the AddItem method of the list to insert a new list item
                SP.ListItem _item = _list.AddItem(_itemcreationinfo);
                _item["Title"] = "New Item from Client Object Model";
                _item["Description"] = "This item was added by a Windows Forms Application";

                // call the Update method
                _item.Update();

                // execute the query of the ClientContext
                _clientcontext.ExecuteQuery();

                // dispose of the ClientContext
                _clientcontext.Dispose();

                MessageBox.Show("List Creation Successful");
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error creating list: " + ex.ToString());
            }
        }
    }
}

6. Hit F5 to run the application. A message will be displayed on the screen indicating whether the operation succeeded or failed.

7. To verify that our Windows Forms application has really created the list and inserted an item into it, let’s open our SharePoint site. Once the site is open, click Site Actions, then View All Site Content.

8. Click the list to open it and check that the item was inserted.

That’s it. Hope this helps.

    Read the article

  • Too Clever for My Own Good

    - by AjarnMark
Yesterday I caught myself being a little too clever for my own good with some ASP.NET code. It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me. Here’s the scenario…

In order to improve the user interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback. (Yes, I understand this could be improved by using AJAX / Callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol’ Postbacks.) It’s a very simple trick, really. All I want to do is, when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message. This lets the user know, “Hey, we’re doing something, don’t click another link or scroll and try to take action right now.”

The first places I hooked this up were easy. Most common cause of a postback: Buttons. And when you’re writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property, which is designed for just this purpose…to run client-side JavaScript before the postback occurs. This is distinguished from the OnClick property, which tells the control what server-side code to run. Great! Done! Easy!

But then there are other controls, like DropDownLists and CheckBoxes, that we use on our pages with the AutoPostback=True setting, which causes postbacks. And these don’t have OnClientClick or OnClientSelectedIndexChanged events. So I started getting creative, using an ASP:CustomValidator control in conjunction with the CausesValidation and ValidationGroup settings on these controls, which basically caused the action on the control to fire the custom validator, which was defined with a client-side validation function which then did the hide content/show message work (and returned a meaningless IsValid setting). This also caused me to define a different ValidationGroup setting for my real data-entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just my show/hide trick.

For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight in the DropDownList and CheckBox controls' declarative syntax. Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this. All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add(“onClick”, “myJavaScriptFunctionName()”) for the checkboxes, and for the DropDownLists (which become select tags) use “onChange” instead of “onClick”. This is exactly the type of thing that the Attributes collection is there for…so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place. Ugh!
A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process. And I got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today. Oh, and one more thing…don’t get lulled into too much reliance on the whiz-bang tool to do it for you. After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.
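As a footnote, here is a minimal sketch of the simpler approach described above. The control IDs (ddlFilter, chkShowAll) are hypothetical placeholders of mine; only myJavaScriptFunctionName comes from the post:

// Minimal sketch of the Attributes.Add technique (illustrative only; the
// control IDs ddlFilter and chkShowAll are hypothetical placeholders).
protected void Page_Init(object sender, EventArgs e)
{
    // A DropDownList renders as a <select> element, so hook its onChange event
    ddlFilter.Attributes.Add("onChange", "myJavaScriptFunctionName()");

    // A CheckBox renders as an <input type="checkbox">, so hook onClick
    chkShowAll.Attributes.Add("onClick", "myJavaScriptFunctionName()");
}

Because these attributes are emitted with the rendered HTML ahead of the injected AutoPostBack script, the client-side function runs before the form submits, which is exactly the ordering the show/hide trick needs.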

    Read the article

  • Design pattern for logging changes in parent/child objects saved to database

    - by andrew
I’ve got 2 database tables in a parent/child (one-to-many) relationship, and three classes representing the data in these tables:

class Parent
{
    public int ID { get; set; }
    // .. other properties
}

class Child
{
    public int ID { get; set; }
    public int ParentID { get; set; }
    // .. other properties
}

class TogetherClass
{
    public Parent Parent;
    public List<Child> ChildList;
}

Lastly, I’ve got a client and a server application – I’m in control of both ends, so I can make changes to both programs as I need to. The client makes a request for a ParentID and receives a TogetherClass for the matching parent and all of its child records. The client app may make changes to the children – add new children, remove or modify existing ones. The client app then sends the TogetherClass back to the server app. The server app needs to update the parent and child records in the database.

In addition, I would like to be able to log the changes. I’m doing this by having 2 separate tables, one for parent and one for child, each containing the same columns as the original plus date/time modified, modified by whom, and a list of the changes. I’m unsure as to the best approach to detect the changes in records – new records, records to be deleted, records with no fields changed, records with some fields changed. I figure I need to read the parent and child records and compare those to the ones in the TogetherClass.

Strategy A: If a TogetherClass child record has an ID of, say, 0, that indicates a new record; insert it. Any deleted child records are no longer in the TogetherClass; see if any of the comparison child records are not found in the TogetherClass (compare using ID) and delete those. Check each remaining child record for changes and log any that changed.

Strategy B: Make a new updated TogetherClass:

class UpdatedClass
{
    public Parent Parent { get; set; }
    public List<Child> ListNewChild { get; set; }
    public List<Child> DeletedChild { get; set; }
    public List<Child> ExistingChild { get; set; } // used for no changes and modified rows
}

And then process as per the lists.

The reason I’m asking for ideas is that neither of these solutions seems optimal to me, and I suspect this problem has been solved already – some kind of design pattern? I am aware of one potential problem in this general approach – client app A requests a record; app B requests the same record; A then saves changes; B then saves changes which may overwrite the changes A made. This is a separate locking issue which I’ll raise in a separate question if I have trouble implementing it.

The actual implementation is C#, SQL Server, and WCF between client and server - sharing a library containing the class implementations. Apologies if this is a duplicate post – I tried searching various terms without finding a match, though.
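For what it's worth, the comparison step in Strategy A can be sketched in a few lines of LINQ. This is only an illustration built on the classes above; the method and variable names are mine, not from the question:

using System.Collections.Generic;
using System.Linq;

public static class ChildChangeDetector
{
    // Classifies the client's child list against the current database rows.
    // Assumes Child.ID == 0 marks a record the client created.
    public static void Classify(
        List<Child> fromClient,      // ChildList sent back by the client
        List<Child> fromDatabase,    // current rows read from the child table
        out List<Child> toInsert,
        out List<Child> toDelete,
        out List<Child> toExamine)
    {
        // New records: no database identity yet
        toInsert = fromClient.Where(c => c.ID == 0).ToList();

        // Deleted records: present in the database but missing from the client list
        var clientIds = new HashSet<int>(fromClient.Select(c => c.ID));
        toDelete = fromDatabase.Where(d => !clientIds.Contains(d.ID)).ToList();

        // Everything else exists on both sides; these still need a field-by-field
        // comparison to separate modified rows (log them) from unchanged rows
        var dbIds = new HashSet<int>(fromDatabase.Select(d => d.ID));
        toExamine = fromClient.Where(c => c.ID != 0 && dbIds.Contains(c.ID)).ToList();
    }
}

The same classification feeds Strategy B directly: the three output lists correspond to ListNewChild, DeletedChild, and ExistingChild.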

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization:

Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD web server on TCP port 80, or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are "multiplexed" onto a single "well-known" TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. This technique takes advantage of the fact that, as a "well-known service", port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers, and the information they contain, directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, then re-encrypting the connections and proxying them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects only to the gateway, and the gateway takes care of those internal network details.

The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Introduction to WebCenter Personalization: “The Conductor”

    - by Steve Pepper
There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor".

The Conductor: This ain't just an ordinary cloud...

One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published and RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

Introducing the Scenario Conductor

Scenarios are declarative, offline-authored documents created using the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one (or more) statements that can:

Create variables that are scoped to the current execution context
Iterate over collections, or loop until a specific condition is met
Execute one or more statements when a condition is met
Invoke other scenarios that exist within the same namespace
Invoke a data provider that integrates with custom applications

Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

Various Client-side Models

The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models:

REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
Java: WebCenter Personalization delivers a robust and lightweight Java client built on the popular Jersey framework. It has never been easier to write a remote Java client to manage remote RESTful services.
Expression Language (EL): Allows the results of Scenario execution to control your user interface, or to embed personalized content, using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration.

Extensible Provider Framework

The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor:

Function Provider: Function Providers are simple Java annotated classes with static methods that are meant to serve as utilities. Some common uses would include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops.
For example: ${myUtilityClass:doStuff(arg1,arg2)} If you are familiar with EL Functions, Function Providers are based on the same concept.

Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model. Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships with three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and WebCenter Activity Graph.

Useful References

If you are looking to get started immediately writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:

Personalizing WebCenter Applications
Authoring Personalized Scenarios in JDeveloper
Using Personalization APIs Externally
Implementing and Calling Function Providers
Implementing and Calling Data Providers

    Read the article
