Search Results

Search found 42993 results on 1720 pages for 'static method'.


  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application, I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results.

    The sample
    We begin by consuming exactly the same web service, which is sitting on a remote server. This time I have created a .NET 3.5 application which will consume the web service using the basicHttpBinding. The configuration used to consume this web service is shown in a diagram in the original post. You can see that, like before, we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the below code sample in the application, which will asynchronously make the web service calls and handle the responses in a callback method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.

    Sample 1 – WCF with Default Timeouts
    In this test I set about recreating the same scenario as before, but this time using WCF as the messaging component. For the first test I used the default configuration settings which WCF had set up when we added a reference to the web service. The timeout values for this test are:
        closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side and server-side trace screenshots are in the original post. Some observations on the results:
    - The timeouts happened quicker than in the previous tests because some calls were timing out before they attempted to connect to the server.
    - The first few calls that timed out did actually connect to the server and did execute successfully on the server.

    Test 2 – Increase Open Connection Timeout & Send Timeout
    In this test I wanted to increase both the send and open timeout values to try and give everything a chance to go through. The timeout values for this test are:
        closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side and server-side trace screenshots are in the original post. Some observations on this test:
    - This test proved that if the timeouts are high enough, everything will just go through.

    Test 3 – Increase Just the Send Timeout
    In this test we wanted to increase just the send timeout. The timeout values for this test are:
        closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side and server-side trace screenshots are in the original post. Some observations on this test:
    - In this test, from both the client and server perspective, everything ran through fine.
    - The open connection timeout did not seem to have any effect.

    Test 4 – Increase Just the Open Connection Timeout
    In this test I wanted to validate the change to the open connection setting by increasing just this on its own. The timeout values for this test are:
        closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side and server-side trace screenshots are in the original post. Some observations on this test:
    - In this test you can see that the increase to the open connection timeout (which relates to opening the channel) was not what stopped the calls timing out; it is the send of data which is timing out.
    - On the server you can see that the few successful calls were fine, but there were also a few calls which hit the server yet timed out on the client.
    - You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options.

    Test 5 – Smaller Increase in Send Timeout
    In this test I wanted to make a smaller increase to the send timeout than before, just to prove that it was the key setting controlling what was timing out. The timeout values for this test are:
        openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side and server-side trace screenshots are in the original post. Some observations on this test:
    - You can see that most of the calls got through fine.
    - On the client you can see that call 20 timed out but still hit the server and executed fine.

    Summary
    At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings have played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks:
    ASMX: The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection.
    WSE: The timeout value includes both the time to obtain a connection and the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40 second timeout setting may not throw the error until 60 seconds, when the connection to the server is made. If the connection to the server is made, you should be aware that your message will be processed, and you should design for this.
    WCF: The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, this setting's counter includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed from making your call in user code, regardless of whether we are waiting for a connection or have an open connection to the server. To a user, this may appear to give better latency in getting an error response compared to WSE or ASMX.
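    For reference only, here is a minimal sketch (not from the original article) of how the four timeout values used in these tests can be set directly in code on a basicHttpBinding client instead of in the configuration file. The proxy class name OrderServiceClient is hypothetical; substitute the one generated by your own service reference.

        // Hedged sketch: the Test 2 timeout values applied in code.
        // OrderServiceClient is a made-up name for the generated WCF proxy.
        var binding = new BasicHttpBinding
        {
            OpenTimeout    = TimeSpan.FromMinutes(10),  // time allowed to open the channel
            SendTimeout    = TimeSpan.FromMinutes(10),  // covers waiting for a connection plus executing the call
            ReceiveTimeout = TimeSpan.FromMinutes(10),
            CloseTimeout   = TimeSpan.FromMinutes(1)
        };
        var endpoint = new EndpointAddress("http://remoteserver/TestService.svc");
        var client = new OrderServiceClient(binding, endpoint);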

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed
    The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format
    When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column
    The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used in an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates
    For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30

        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
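    For reference, a hedged sketch (not part of Maria's post) of how that session-level check can be written as a single query, joining v$mystat to v$statname on statistic# so the values and the display names come back together:

        -- Illustrative only: IM scan statistics for the current session.
        SELECT sn.display_name, ms.value
        FROM   v$mystat ms
        JOIN   v$statname sn ON ms.statistic# = sn.statistic#
        WHERE  sn.display_name IN ('IM scan CUs columns accessed',
                                   'IM scan segments minmax eligible',
                                   'IM scan CUs pruned');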

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought "Hey, this should be easy!" because Windows Phone 7 development is based on Silverlight and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can't use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there's a reference to an assembly that's not available (System.Windows.Browser). My second thought was "OK, no problem!" because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you'd like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that's running in the context of a page in a SharePoint site, because the credentials of the currently logged on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site's page, so the application should build credentials that have to be passed to SharePoint's WCF service.

    This turns out to be a small challenge in Silverlight 3: the WebClient doesn't support authentication. There is a Credentials property, but when you set it and make the request you get a NotImplementedException. This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there's already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I'm writing in this paragraph are just assumptions that make a lot of sense IMHO; I don't have any info that all of this will happen, but I really hope so.

    So are SharePoint developers out of the Windows Phone development game until they get this fixed? Well, luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
    The following code snippet is getting all the announcements of an Announcements list in a SharePoint site:

        HttpWebRequest webReq =
            (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
        webReq.Credentials = new NetworkCredential("username", "password");

        webReq.BeginGetResponse(
            (result) =>
            {
                HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

                XDocument xdoc = XDocument.Load(
                    ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

                XNamespace ns = "http://www.w3.org/2005/Atom";
                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                        MessageBox.Show(item.Title);
                });
            }, webReq);

    When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses LINQ to XML to parse the resulting Atom feed; the Title of every announcement is displayed in a MessageBox. Check out my previous post if you'd like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it's of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

        delegate void GetAtomFeedCallback(Stream responseStream);

        public MainPage()
        {
            InitializeComponent();

            SupportedOrientations = SupportedPageOrientation.Portrait |
                SupportedPageOrientation.Landscape;

            string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
            string username = "username";
            string password = "password";
            string domain = "";

            GetAtomFeed(url, username, password, domain, (s) =>
            {
                XNamespace ns = "http://www.w3.org/2005/Atom";
                XDocument xdoc = XDocument.Load(s);

                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                    {
                        MessageBox.Show(item.Title);
                    }
                });
            });
        }

        private static void GetAtomFeed(string url, string username,
            string password, string domain, GetAtomFeedCallback cb)
        {
            HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
            webReq.Credentials = new NetworkCredential(username, password, domain);

            webReq.BeginGetResponse(
                (result) =>
                {
                    HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
                    cb(resp.GetResponseStream());
                }, webReq);
        }
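    As a side note (these example URLs are not from the original post), listdata.svc also accepts the standard OData query options, so the request URL itself can narrow down what comes back before the Atom feed is parsed:

        http://yoursite/_vti_bin/listdata.svc/Announcements?$select=Title
        http://yoursite/_vti_bin/listdata.svc/Announcements?$orderby=Modified desc&$top=5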

    Read the article

  • Dynamic Bursting ... no really!

    - by Tim Dexter
    If any of you have seen me or my colleagues present BI Publisher then we have hopefully mentioned 'bursting.' You may have even seen a demo where we talk about being able to take a batch of data, say invoices; split them by some criteria, say customer ID; format them with a template; generate the output; and then deliver the documents to the recipients with a click. We, and especially I, always say this can be completely dynamic! By this I mean that you could store customer preferences in a database: what layout each customer would like, what output format they would like, and how they would like the document delivered. We (I) talk a good talk, but typically don't do the walk in a demo. We hard-code everything in the bursting query or bursting control file to get the concept across. But no more, peeps! I have finally put together a dynamic bursting demo! It's been minutes in the making, but it's been tough to find those minutes! Read on ...

    It's nothing amazing in terms of making the burst dynamic. I created a CUSTOMER_PREFS table with some simple UI in an APEX application so that I can maintain customers' requirements. In EBS you have descriptive flexfields that could do the same thing, or probably even 'contact' fields to store most of the info. Here's my table structure:

        Name                           Type
        ------------------------------ --------
        CUSTOMER_ID                    NUMBER(6)
        TEMPLATE_TYPE                  VARCHAR2(20)
        TEMPLATE_NAME                  VARCHAR2(120)
        OUTPUT_FORMAT                  VARCHAR2(20)
        DELIVERY_CHANNEL               VARCHAR2(50)
        EMAIL                          VARCHAR2(255)
        FAX                            VARCHAR2(20)
        ATTACH                         VARCHAR2(20)
        FILE_LOC                       VARCHAR2(255)

    Simple enough, right? We just need CUSTOMER_ID as the key for the bursting engine to join it to the customer data at burst time. I have not covered the full delivery options, just email, fax and file location. Remember, it's a demo, people :0) However, the principle is exactly the same for each delivery type. They each have a set of attributes that need to be provided, and you will need to handle that in your bursting query. On a side note, in EBS, where you use a bursting control file, you can apply the same principles that I'm laying out here; you just need to get the customer bursting info into the XML data stream so that you can refer to it in the control file using XPath expressions. Next, we need to look up what attributes or parameters are required for each delivery method. That can be found in the documentation here.
    Now that we know the combinations of parameters and delivery methods, we can construct the query using a series of decode statements:

        select distinct cp.customer_id "KEY",
          cp.template_name TEMPLATE,
          cp.template_type TEMPLATE_FORMAT,
          'en-US' LOCALE,
          cp.output_format OUTPUT_FORMAT,
          'false' SAVE_FORMAT,
          cp.delivery_channel DEL_CHANNEL,
          decode(cp.delivery_channel, 'FILE', cp.file_loc,
                                      'EMAIL', cp.email,
                                      'FAX', cp.fax) PARAMETER1,
          decode(cp.delivery_channel, 'FILE', c.cust_last_name||'_orders.pdf',
                                      'EMAIL', '[email protected]',
                                      'FAX', 'faxserver.com') PARAMETER2,
          decode(cp.delivery_channel, 'FILE', NULL,
                                      'EMAIL', '[email protected]',
                                      'FAX', null) PARAMETER3,
          decode(cp.delivery_channel, 'FILE', NULL,
                                      'EMAIL', 'Your current orders',
                                      'FAX', NULL) PARAMETER4,
          decode(cp.delivery_channel, 'FILE', NULL,
                                      'EMAIL', 'Please find attached a copy of your current orders with BI Publisher, Inc',
                                      'FAX', NULL) PARAMETER5,
          decode(cp.delivery_channel, 'FILE', NULL,
                                      'EMAIL', 'false',
                                      'FAX', NULL) PARAMETER6,
          decode(cp.delivery_channel, 'FILE', NULL,
                                      'EMAIL', '[email protected]',
                                      'FAX', NULL) PARAMETER7
        from cust_prefs cp, customers c, orders_view ov
        where cp.customer_id = c.customer_id
        and cp.customer_id = ov.customer_id
        order by cp.customer_id

    Pretty straightforward; you just need to test, test, test the query and ensure it's bringing back the correct data based on each customer's preferences. Notice the NULL values for parameters that are not relevant for a given delivery channel. You should end up with bursting control data that the bursting engine can use. Now your users can run the burst, and documents will be formatted, generated and delivered based on the customer prefs.
    If you're interested in the example, I have used the sample OE schema data for the base report. The report files and CUST_PREFS table are zipped up here. The zip contains the data model (.xdmz), the report and templates (.xdoz) and the SQL scripts to create and load data to the CUST_PREFS table. Once you load the report into the catalog, you'll need to create the OE data connection and point the data model at it. You'll probably need to re-point the report to the data model too. Happy Bursting!
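    To make the link between the preferences table and the bursting query a little more concrete, here is a hedged example row (not part of Tim's post; the values are purely illustrative) that would make one customer receive a PDF of the orders report by email:

        -- Hypothetical preference row: customer 101, RTF template, PDF output, delivered by email.
        insert into cust_prefs
          (customer_id, template_type, template_name, output_format, delivery_channel, email)
        values
          (101, 'RTF', 'CustomerOrders.rtf', 'PDF', 'EMAIL', 'customer@example.com');
        commit;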

    Read the article

  • Tutorial for configuring OpenVPN [on hold]

    - by user2699451
    I have been through 10+ tutorials on setting up OpenVPN, and each tutorial gives a different problem... Does anyone know of a decent and helpful website/tutorial which I could follow to get it set up? I have been battling through it for almost 2 months now. Yes, I have also bugged forums.openvpn, but I think I have "reached my post limit" with them. I have to configure it remotely via ssh.

    UPDATE: Okay, I have been asked to be clearer on the topic. I followed this tutorial (as an example): http://www.servermom.com/how-to-build-openvpn-server-on-centos-6-x/732/
    I had no issues setting it up, except that when I boot into Windows and run the OpenVPN GUI client, it connects and gives this error:

        WARNING: Bad encapsulated packet length from peer (21331), which must be > 0 and <= 1576 -- please ensure that --tun-mtu or --link-mtu is equal on both peers -- this condition could also indicate a possible active attack on the TCP link -- [Attempting restart...]

    Here is my server config:

        port 1194                  #- port
        proto udp                  #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        reneg-sec 0
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login #- Co$
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf #- Uncomment$
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 5 30
        comp-lzo
        persist-key
        persist-tun
        status 1194.log
        verb 3

    and my client config:

        client
        dev tun
        proto udp
        remote [server ip] 1194    # - Your server IP and OpenVPN Port
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        persist-key
        persist-tun
        ca ca.crt
        auth-user-pass
        comp-lzo
        reneg-sec 0
        verb 3

    OpenVPN Client Log:

        Thu Oct 31 11:51:29 2013 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006
        Thu Oct 31 11:51:44 2013 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Thu Oct 31 11:51:44 2013 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
        Thu Oct 31 11:51:44 2013 LZO compression initialized
        Thu Oct 31 11:51:44 2013 Control Channel MTU parms [ L:1576 D:140 EF:40 EB:0 ET:0 EL:0 ]
        Thu Oct 31 11:51:44 2013 Data Channel MTU parms [ L:1576 D:1450 EF:44 EB:135 ET:32 EL:0 AF:3/1 ]
        Thu Oct 31 11:51:44 2013 Local Options hash (VER=V4): '2547efd2'
        Thu Oct 31 11:51:44 2013 Expected Remote Options hash (VER=V4): '77cf0943'
        Thu Oct 31 11:51:44 2013 Attempting to establish TCP connection with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCP connection established with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link local: [undef]
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link remote: x.x.x.x:1194
        // after this it just hangs, nothing happens

    So I don't know what I am doing wrong, but I am getting a bit impatient, and on each forum where I post this I get stupid/unrelated/unhelpful answers...

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and error pages are being cached. I need to force a refresh of the content.

    Long version: I've set up a very minimal squid instance running on a gateway which shouldn't cache ANYTHING but needs to be used solely as a domain-based web filter. I'm using another application which redirects un-authenticated users to the proxy, which then uses the deny_info option to redirect any non-whitelisted request to the login page. After the user has authenticated, the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated, so they get redirected via the firewall:

        iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135

    to the proxy. At this point squid redirects the user to the login page using a 302 (I've also tried 307, and I've also made sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get a firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem with the browsers and not squid, but I'm not sure how to get around it. Full squid config below.

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl localnet src 192.168.182.0/23 # RFC1918 possible internal network
        acl localnet src fc00::/7         # RFC 4193 local private network range
        acl localnet src fe80::/10        # RFC 4291 link-local (directly plugged) machines
        acl https port 443
        acl http port 80
        acl CONNECT method CONNECT

        #
        # Disable Cache
        #
        cache deny all
        via off
        negative_ttl 0 seconds
        refresh_all_ims on
        #error_default_language en

        # Allow manager access only from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny access to anything other then http
        http_access deny !http

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !https

        visible_hostname gate.ovatn.net

        # Disable memory pooling
        memory_pools off

        # Never use neigh cache objects for cgi-bin scripts
        hierarchy_stoplist cgi-bin ?

        #
        # URL rewrite Test Settings
        #
        #acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        #url_rewrite_program /usr/lib/squid/redirector
        #url_rewrite_access allow !whitelist
        #url_rewrite_children 5 startup=0 idle=1 concurrency=0
        #http_access allow all

        #
        # Deny Info Error Test
        #
        acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        deny_info http://login.domain.com/ whitelist
        #deny_info ERR_ACCESS_DENIED whitelist
        http_access deny !whitelist
        http_access allow whitelist

        http_port 39135 transparent

        ## Debug Values
        access_log /var/log/squid/access-pre.log
        cache_log /var/log/squid/cache-pre.log

        # Production Values
        #access_log /dev/null
        #cache_log /dev/null

        # Set PID file
        pid_filename /var/run/gatekeeper-pre.pid

    SOLUTION: I believe I might have found a solution to this. After days and days of trying to figure it out, only through a random stumble did I find:

        client_persistent_connections off
        server_persistent_connections off

    This did the trick. So it wasn't so much the cache as it was a single persistent connection messing things up. W000T!

    Read the article

  • Arch Linux with an nginx/django setup refuses to display ANYTHING

    - by Holland
    I'm on Amazon Ec2, with an Arch Linux server. While I truly am loving it, I'm having the issue of actually getting nginx to display anything. Everytime I try to throw my hostname into the browser, the browser states that it's not available for some reason - almost as if the host doesn't even exist. One thing I'd like to know is, how can I get this up and running? Is there a specific arch linux configuration I have to do to make it web accessible? I have port 80 open, as well as port 22. I've tried using gunicorn, python-flup, and nginx. Nginx Config user http; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name _; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; #charset koi8-r; location ^~ /media/ { root /path/to/media; } location ^~ /admin-media/ { root /usr/lib/python2.7/site-packages/django/contrib/admin/media; } location / { root /path/to/root/; fastcgi_pass 127.0.0.1:8080; fastcgi_param SERVER_NAME $server_name; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass_header Authorization; fastcgi_intercept_errors off; fastcgi_index index.html; index index.htm index.html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /etc/nginx/html/50x.html; } } # server { # listen 80; # server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; # location / { # root html; # index index.html index.htm; # } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { root html; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} #} # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } I can't quite tell if it's a server issue or a configuration issue: I've followed so many guides now I can't even count them all. 
    The thing is that Django itself is working fine, and my permissions on the document root where the site files are stored are 777. On top of that, I have a git repo which works perfectly fine, and django, python, and runfcgi all start without issues. The same goes for gunicorn, when I do a gunicorn_django -b 0.0.0.0:8000 in my document root. Here is my output from that:

        2012-04-15 05:17:37 [3124] [INFO] Starting gunicorn 0.14.2
        2012-04-15 05:17:37 [3124] [INFO] Listening at: http://0.0.0.0:8081 (3124)
        2012-04-15 05:17:37 [3124] [INFO] Using worker: sync
        2012-04-15 05:17:37 [3127] [INFO] Booting worker with pid: 3127

    As far as I know, everything seems fine, as do error.log and access.log for nginx. The access log is completely blank, for that matter. I just feel lost here; what would be a step in the right direction to debugging an issue such as this?
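    For what it's worth, here is a hedged illustration (not taken from the question) of the kind of FastCGI backend the fastcgi_pass 127.0.0.1:8080 line in the config above expects to find listening; whichever command is used, the host and port have to match what nginx proxies to:

        # Hypothetical example: start Django's FastCGI server on the address
        # that nginx's fastcgi_pass directive points at.
        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080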

    Read the article

  • PHP Mail() to Gmail = Spam

    - by grantw
    Recently Gmail has started marking emails sent directly from my server (using php mail()) as spam and I'm having problems trying to find the issue. If I send an exact copy of the same email from my email client it goes to the Gmail inbox. The emails are plain text, around 7 lines long and contain a URL link in plain text. As the emails sent from my client are getting through fine I'm thinking that the content isn't the issue. It would be greatly appreciated if someone could take a look at the the following headers and give me some advice why the email from the server is being marked as spam. Email from Server: Delivered-To: [email protected] Received: by 10.49.98.228 with SMTP id el4csp101784qeb; Thu, 15 Nov 2012 14:58:52 -0800 (PST) Received: by 10.60.27.166 with SMTP id u6mr2296595oeg.86.1353020331940; Thu, 15 Nov 2012 14:58:51 -0800 (PST) Return-Path: [email protected] Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. [174.120.246.138]) by mx.google.com with ESMTPS id df4si17005013obc.50.2012.11.15.14.58.51 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:58:51 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Date:Message-Id:Content-Type:Reply-to:From:Subject:To; bh=2RJ9jsEaGcdcgJ1HMJgQG8QNvWevySWXIFRDqdY7EAM=; b=mGebBVOkyUhv94ONL3EabXeTgVznsT1VAwPdVvpOGDdjBtN1FabnuFi8sWbf5KEg5BUJ/h8fQ+9/2nrj+jbtoVLvKXI6L53HOXPjl7atCX9e41GkrOTAPw5ZFp+1lDbZ; Received: from grantw by dom.domainbrokerage.co.uk with local (Exim 4.80) (envelope-from [email protected]) id 1TZ8OZ-0008qC-Gy for [email protected]; Thu, 15 Nov 2012 22:58:51 +0000 To: [email protected] Subject: Offer Accepted X-PHP-Script: www.domainbrokerage.co.uk/admin.php for 95.172.231.27 From: My Name [email protected] Reply-to: [email protected] Content-Type: text/plain; charset=Windows-1251 Message-Id: [email protected] Date: Thu, 15 Nov 2012 22:58:51 +0000 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk X-AntiAbuse: Original Domain - gmail.com X-AntiAbuse: Originator/Caller UID/GID - [500 500] / [47 12] X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: grantw/from_h Email from client: Delivered-To: [email protected] Received: by 10.49.98.228 with SMTP id el4csp101495qeb; Thu, 15 Nov 2012 14:54:49 -0800 (PST) Received: by 10.182.197.8 with SMTP id iq8mr2351185obc.66.1353020089244; Thu, 15 Nov 2012 14:54:49 -0800 (PST) Return-Path: [email protected] Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. 
[174.120.246.138]) by mx.google.com with ESMTPS id ab5si17000486obc.44.2012.11.15.14.54.48 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:54:49 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Content-Transfer-Encoding:Content-Type:Subject:To:MIME-Version:From:Date:Message-ID; bh=bKNjm+yTFZQ7HUjO3lKPp9HosUBfFxv9+oqV+NuIkdU=; b=j0T2XNBuENSFG85QWeRdJ2MUgW2BvGROBNL3zvjwOLoFeyHRU3B4M+lt6m1X+OLHfJJqcoR0+GS9p/TWn4jylKCF13xozAOc6ewZ3/4Xj/YUDXuHkzmCMiNxVcGETD7l; Received: from w-27.cust-7941.ip.static.uno.uk.net ([95.172.231.27]:1450 helo=[127.0.0.1]) by dom.domainbrokerage.co.uk with esmtpa (Exim 4.80) (envelope-from [email protected]) id 1TZ8Ke-0001XH-7p for [email protected]; Thu, 15 Nov 2012 22:54:48 +0000 Message-ID: [email protected] Date: Thu, 15 Nov 2012 22:54:50 +0000 From: My Name [email protected] User-Agent: Postbox 3.0.6 (Windows/20121031) MIME-Version: 1.0 To: [email protected] Subject: Offer Accepted Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk X-AntiAbuse: Original Domain - gmail.com X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: [email protected]
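    For reference, a hedged sketch (not taken from the question; the addresses are placeholders) of the kind of mail() call that produces the server-generated headers above. The optional fifth argument sets the envelope sender recorded as Return-Path, which is one of the few delivery-related knobs available when sending directly from PHP:

        <?php
        // Illustrative only - the real script and addresses are not shown in the post.
        $to      = 'recipient@example.com';
        $subject = 'Offer Accepted';
        $message = "Plain text body of around seven lines, including a URL in plain text.";
        $headers = "From: My Name <sales@example.co.uk>\r\n"
                 . "Reply-To: sales@example.co.uk\r\n"
                 . "Content-Type: text/plain; charset=Windows-1251\r\n";

        // The -f parameter sets the envelope sender (Return-Path).
        mail($to, $subject, $message, $headers, '-fsales@example.co.uk');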

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :)  I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work.  I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI. Today, I developed a mechanism for starting from scratch with your database.  By scratch, I mean blowing away the existing database and creating it again from a single command line call.  I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command and that they should be operating off of their own isolated database to improve productivity.  These scripts will help that. Here’s how I did it.  First, we have to disconnect users.  I did so using the help of a script from sql server central.  Note that I’m using sqlcmd variable replacement. -- kills all the users in a particular database -- dlhatheway/3M, 11-Jun-2000 declare @arg_dbname sysname declare @a_spid smallint declare @msg varchar(255) declare @a_dbid int set @arg_dbname = '$(DatabaseName)' select @a_dbid = sdb.dbid from master..sysdatabases sdb where sdb.name = @arg_dbname declare db_users insensitive cursor for select sp.spid from master..sysprocesses sp where sp.dbid = @a_dbid open db_users fetch next from db_users into @a_spid while @@fetch_status = 0 begin select @msg = 'kill '+convert(char(5),@a_spid) print @msg execute (@msg) fetch next from db_users into @a_spid end close db_users deallocate db_users GO Once all users are booted from the database, we can commence with recreating the database.  I generated the script that is used to create a database from SQL Server management studio, so I’m only going to show the bits that weren’t generated that are important.  There are a bunch of Alter Database statements that aren’t shown. First, I had to find the default location of the database files in the install, since they can be in many different locations.  I used Method 1 from a technet blog and then modified it a bit to do what I needed to do.  I ended up using dynamic SQL because for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string.  I’m dropping the database first, if it exists.  Here’s the code:   IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)') BEGIN drop database $(DatabaseName) END; go IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; -- Create temp database. Because no options are given, the default data and --- log path locations are used CREATE DATABASE zzTempDBForDefaultPath; DECLARE @Default_Data_Path VARCHAR(512), @Default_Log_Path VARCHAR(512); --Get the default data path SELECT @Default_Data_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0); --Get the default Log path SELECT @Default_Log_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1); --Clean up. 
IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; DECLARE @SQL nvarchar(max) SET @SQL= 'CREATE DATABASE $(DatabaseName) ON PRIMARY ( NAME = N''$(DatabaseName)'', FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''', SIZE = 2048KB , FILEGROWTH = 1024KB ) LOG ON ( NAME = N''$(DatabaseName)Log'', FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''', SIZE = 1024KB , FILEGROWTH = 10%) ' exec (@SQL) GO And with that, your database is created.  You can run these scripts on any server and on any database name.  To do that, I created an MSBuild script that looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <DatabaseName>MyDatabase</DatabaseName> <Server>localhost</Server> <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd> <ScriptDirectory>.\Scripts</ScriptDirectory> </PropertyGroup> <Target Name ="Rebuild"> <ItemGroup> <ScriptFiles Include="$(ScriptDirectory)\*.sql"/> </ItemGroup> <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/> </Target> </Project> Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory.  Note also that the target is using batching to run each script in the scripts subdirectory, one after the other.  Each script is passed to the sqlcmd command line execution using the .Identity property on the itemgroup that is created.  This target file is saved in the file “Database.target”. To make this work, you’ll need msbuild in your path, and then run the following command: msbuild database.target /target:Rebuild Once you’ve got your virgin database setup, you’d then need to use a tool like dbdeploy.net to determine that it was a virgin database, build a change script based on the change scripts, and then you’d want another sqlcmd call to update the database with the appropriate scripts.  I’m doing that next, so I’ll post a blog update when I’ve got it working. Technorati Tags: MSBuild,Agile,CI,Database
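    As a hedged sketch of that next step (this target is not part of Robert's post; the ChangeScripts folder name and the use of dbdeploy.NET output are assumptions), the same batching pattern could apply versioned change scripts once the Rebuild target has run:

        <Target Name="ApplyChangeScripts" DependsOnTargets="Rebuild">
          <ItemGroup>
            <!-- Change scripts, e.g. generated for/by a tool such as dbdeploy.NET -->
            <ChangeScripts Include="$(ScriptDirectory)\ChangeScripts\*.sql"/>
          </ItemGroup>
          <!-- Batch over each script, exactly like the Rebuild target does -->
          <Exec Command="$(SqlCmd) &quot;%(ChangeScripts.Identity)&quot;" ContinueOnError="false"/>
        </Target>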

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted: “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days. - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view. - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough” So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM Architect with several major projects under his belt. Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience? Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist so, I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the system's side like Sun resellers, Solaris stuff and even worked with Sun's Engineering in the making of an Hierarchical Storage Product (Sun CIS if you know it) and the application's side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions in most big customers in Portugal, to name a few projects: - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom) - The identity repository of 5/5 of the Biggest Portuguese banks - The Portuguese government federation of services project DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards based architecture; (you) were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc. JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions but, then reality kicks in. As an ISP I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, then a connector became needed in order to have UNIX boxes to authenticate against AD. And, that strategy worked but, each new machine required the component to be installed, monitoring had to be made for that component and each new app had to be independently certified. 
A self care user portal was an ongoing project, AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved so, on the go live, things weren't properly tested and a general outage followed. Several hours and 1 roll back later, everything was back working. But, the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them so, integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self care portal, combined with a user workflow so, all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer and, even True64 systems were, for the first time integrated also with a 5 minute work of a junior system admin. True, all the windows clients and MS apps still went to the AD for their authentication needs so, from the start everybody knew that they weren't 100% free of migration pains but, now they had a single point of problems to look at. If you're looking for numbers: - 500K directory entries (users) - 2-300 systems After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest. DP: 2. Using Federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example? JC: The market is dynamic. The company that's being bought today tomorrow will be sold again. Companies that spread on different markets may see the regulator forcing a sale of part of a company due to monopoly reasons and companies that are in multiple countries have to comply with different legislations. Our job, as IT architects, while addressing the customers and employees authentication services, is quite hard and, quite contrary. On one hand, we need to give access to all of our employees to the relevant systems, apps and resources and, we already have marketing talking with us trying to find out who's a customer of the bough company but not from ours to address. On the other hand, we have to do that and keep in mind we may have to break up all that effort and that different countries legislation may became a problem with a full integration plan. That's a job for user Federation. you don't want to be the one who's telling your President that he will sell that business unit without it's customer's database (making the deal worth a lot less) or that the buyer will take with him a copy of your entire customer's database. Federation enables you to start controlling permissions to users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? 
Do you want, because of that, to have a dedicated system for their expenses reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a Federation trust circle and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, you obviously control if a user has access to any particular application, either that user is in your local database or stored in a directory on the other side of the world. DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example? JC: No, I don't have any backing for this number. One of the companies I did system Administration for has a SOX compliance policy in place (I remind you that I live in Portugal so, this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample and we spend about 15 man/days gathering all the required info they ask. I did some work with Sun's Identity auditor and, from what I've been seeing, Oracle's product is even better and, I've seen that most of the information they ask would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from Identity Auditor team would do a much better job than me explaining this time savings. Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • top tweets WebLogic Partner Community – June 2013

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity. Please feel free to send us your news! Lucas Jellema ?Getting started with Java EE 7: The Tutorial http://docs.oracle.com/javaee/7/tutorial/doc/home.htm … Simon Haslam I'm looking forward to starting a "WLS on ODA" proof of concept - some ideas for testing: http://www.veriton.co.uk/roller/fmw/entry/virtualised_oda_proof_of_concept … Frank Munz ?It's not too late - I just submitted two presentations about #OracleWebLogic and #Coherence for the @DOAGeV conference in Nürnberg. Did you? Arun Gupta ?Tyrus 1.0 User Guide: https://tyrus.java.net/documentation/1.0/user-guide.html … #WebSocket #JavaEE7 #GlassFish Arun Gupta #JavaEE7 Launch Webinar Technical Breakout replays on Youtube: http://bit.ly/12uUicT JSON 1.0 , EJB .2, Batch 1.0 more coming! OracleBlogs ?FREE Virtual Developer Day: Java SE, Java EE, Java Emebedded on Jun 19th and 25th http://ow.ly/2xBkwV Markus Eisele #Oracle #JavaSE Critical Patch Update Pre-Release Announcement - June 2013 http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html … #security OracleSupport_WLS ?Simple Custom #JMX MBeans with #WebLogic 12c and #Spring http://pub.vitrue.com/3kEr Oracle Technet Building Java HTML5/WebSocket Applications with JSR 356 - 4pm - Grand Ballroom Salon A/B #qconnewyork WebLogic Community Oracle Fusion Middleware (OFM) 11g (11.1.1.7) Starter Kit available & Customizable Demos http://wp.me/p1LMIb-BK Oracle Technet #Java EE 7: Moving Java Forward for the Enterprise | @java http://pub.vitrue.com/tHiM OTNArchBeat ?Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project | @AndrejusB http://pub.vitrue.com/lZPR WebLogic Community ?ExaLogic In Memory Applications & Whitepapers Building Large Scale E-Commerce Platforms & Rethink the Entire Application Lifecycle… WebLogic Community ?Coherence YouTube videos http://wp.me/p1LMIb-BG Arun Gupta ?WARNING: Next 2 days are going to be loaded with #JavaEE7 launch related tweets, and offline next week! JDeveloper & ADF Using Contextual Event in Oracle ADF http://dlvr.it/3Vpybr Oracle WebLogic Check out new blog on #hybrid_cloud & why choice is important http://bit.ly/1b1QGhL Andrejus Baranovskis Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project http://fb.me/1M9iWNmAw WebLogic Community WebLogic on Oracle Database Appliance by Frances Zhao http://wp.me/p1LMIb-BE OTNArchBeat ?New: A-Team Chronicles >> A great resource for technical content covering Oracle Fusion Middleware / Fusion Apps http://pub.vitrue.com/qbzS Oracle for Partners ?Take Java To The Edge: Java Virtual Developer Day – June 19 & June 25 http://bit.ly/19fGlSX Adam Bien ?Looking forward to tomorrow's #javaee7 + #angularjs #html5 marriage at #jpoint. See you there: http://www.jpoint.nl/meetingpoint/editie-2013#sessie-1 … shay shmeltzer ?There is a new patch for the #Oracle #ADF Mobile extension - use help->check for updates to get it. Frank Munz ?Not using @OracleWebLogic 12c yet? Australia does! Reviews from my @AUSOUG workshops in Brisbane, Adelaide and Perth. http://goo.gl/BfVc4 Arun Gupta ?WebSocket, Server-Sent Events, #JavaEE7 sessions accepted at #jaxlondon ... that's gonna be at least third trip to London this year! 
WebLogic Community SPARC T5-8 Delivers Best Single System SPECjEnterprise2010 Benchmark running WebLogic 12c http://wp.me/p1LMIb-BC WebLogic Community The Ultimate Java EE Event - 16 Power Workshops mit allen wichtigen Java-EE-Themen http://wp.me/p1LMIb-BY Oracle WebLogic ?@OracleWebLogic 7 Jun New Blog Post: Using try-with-resources with JDBC objects http://ow.ly/2xryb5 JDeveloper & ADF Switching Lists of Values http://dlvr.it/3PbCkw WebLogic Community ?YouTube channel Learning Oracle's ADF http://wp.me/p1LMIb-zA Markus Eisele [GER] RT @heisedc: #Java-Entwicklung in #Oracles Public #Cloud http://heise.de/-1866388/ftw OracleBlogs ?Coherence Incubator & Community Source Code & Release Documentation http://ow.ly/2x2fXK chriscmuir ?New blog post: Migrating ADF Mobile apps from 1.0 to 1.1 https://blogs.oracle.com/onesizedoesntfitall/entry/migrating_adf_mobile_apps_from … JDeveloper & ADF ?ADF JavaScript Partitioning for Performance http://dlvr.it/3Trw15 WebLogic Community WebLogic Server Security Workshop June 27th 2013 Germany http://wp.me/p1LMIb-C7 WebLogic Community Oracle Optimized Solution for WebLogic Server 12c http://wp.me/p1LMIb-BA WebLogic Community Virtualize and Run Your Forms Applications in the Cloud - Now On Demand http://wp.me/p1LMIb-By Lucas Jellema Innteresting presentation on various aspects of end user assistance in Fusion Applications (ADF based): http://www.slideshare.net/uobroin/ouag-ireland-final2012slideshare … Adam Bien ?Summer Of JavaEE Workshops And Gigs: Free Hacking night:11.06.2013, Utrecht JavaEE 7 Meets HTML 5 and AngularJ... http://bit.ly/11XRjt4 WebLogic Community ?Real World ADF Design & Architecture Principles Trainings Germany, Poland & Portugal http://wp.me/p1LMIb-Bw Oracle for Partners ?JAVA Virtual Developer Day – June 19 & June 25 - Watch educational content and engage with Oracle experts online https://oracle.6connex.com/portal/java2013/login/?langR=en_US&mcc=OPNNSL … Markus Eisele ?[blog] Java EE 7 is final. Thoughts, Insights and further Pointers. http://dlvr.it/3SrxnB #javaee7 WebLogic Community Oracle takes the top spot for market share in the Application Server Market Segment for 2012 http://wp.me/p1LMIb-Bu OTNArchBeat ?Oracle ACE Director @LucasJellema is "very pleasantly surprised" with the new ADF Academy. 
http://pub.vitrue.com/8fad chriscmuir ?Sell out crowd for our ADF architecture course in Munich #adfarch pic.twitter.com/zhNtQJ25JV Markus Eisele ?[blog] New German Article: Java 7 Update 21 Security Improvements http://dlvr.it/3Sc8V9 #java #heise #security Markus Eisele ?[blog] New German Article: Oracle Java Cloud Service http://dlvr.it/3Sc20V #java #heise #OracleCloud OracleSupport_WLS ?Troubleshooting and Tuning with #WebLogic - Developer Webcast now available on #Youtube http://pub.vitrue.com/GSOy Andrejus Baranovskis New ADF Academy - Impressive Concept for ADF eLearning http://fb.me/2kYSMKKR5 OracleSupport_WLS ?Removing a #weblogic domain properly http://pub.vitrue.com/ZndM WebLogic Community WebLogic Partner Community Newsletter May 2013 http://wp.me/p1LMIb-Bp Oracle WebLogic ?Blog: Troubleshooting tools Part 3- Heap Dumps #Oracle #WebLogic Read the series http://bit.ly/14CQSD2 Oracle WebLogic ?Blog: #WebLogic_Server on #Oracle_Database_Appliance- How to conjure a WebLogic cluster- http://bit.ly/11fciHA Oracle WebLogic ?Check out new cool features in Oracle Traffic Director- http://bit.ly/11fbz9h WebLogic Community Additional new material WebLogic Community April 2013 http://wp.me/p1LMIb-zM WebLogic Community New WebLogic references - we want yours http://wp.me/p1LMIb-zK OracleSupport_WLS ?#Weblogic Session Replication jsession ID and F5 http://pub.vitrue.com/dWZp OracleBlogs ?top tweets WebLogic Partner Community May 2013 http://ow.ly/2xc8M5 WebLogic Community Welcome to the Spring edition of Oracle Scene http://wp.me/p1LMIb-zE Andreas Koop ?[blog post] ADF: Static Values View Object does not show any values (solved) http://bit.ly/14RDZ8p OracleBlogs ?ADF Mobile - accessing the SQLite database http://ow.ly/2x85r0 OracleSupport_WLS Youtube channel- Troubleshooting and Tuning with #WebLogic.#JRockit #SOAP #JRF http://pub.vitrue.com/qMxu Arun Gupta Next Java Magazine is all about #JavaEE7...productivity, HTML5, WebSocket, Batch & more. Subscribe http://ow.ly/lkD5D (@Oraclejavamag) Oracle WebLogic How to configure a #WebLogic cluster on #Oracle_Database_Appliance? It’s easy, read how. http://bit.ly/11fciHA Oracle WebLogic ?Blog: How to use Heap Dumps to troubleshooting memory leaks- #Oracle #WebLogic_Server http://bit.ly/14CQSD2 OracleBlogs ?Over 100 Images To Be Added to NetBeans Platform Showcase http://ow.ly/2x7Fvp Lucas Jellema A new release of the ADF EMG Task Flow Tester is now available for both JDeveloper 11 R1 and R2. https://java.net/projects/adf-task-flow-tester/pages/GettingStarted … WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Working with Reporting Services Filters–Part 1

    - by smisner
There are two ways that you can filter data in Reporting Services. The first way, which usually provides a faster performance, is to use query parameters to apply a filter using the WHERE clause in a SQL statement. In that case, the structure of the filter depends upon the syntax recognized by the source database. Another way to filter data in Reporting Services is to apply a filter to a dataset, data region, or a group. Using this latter method, you can even apply multiple filters. However, the use of filter operators or the setup of multiple filters is not always obvious, so in this series of posts, I'll provide some more information about the configuration of filters.
First, why not use query parameters exclusively for filtering? Here are a few reasons:
- You might want to apply a filter to part of the report, but not all of the report.
- Your dataset might retrieve data from a stored procedure, and doesn't allow you to pass a query parameter for filtering purposes.
- Your report might be set up as a snapshot on the report server and, in that case, cannot be dynamically filtered based on a query parameter.
Next, let's look at how to set up a report filter in general. The process is the same whether you are applying the filter to a dataset, data region, or a group. When you go to the Filters page in the Properties dialog box for whichever of these items you selected (dataset, data region, group), you click the Add button to create a new filter. The interface looks like this: The Expression field is usually a field in the dataset, so to make it easier for you to make a selection, the drop-down list displays all of the current dataset fields. But notice the expression button to the right, which means that you can set up any type of expression – not just a dataset field. To the right of the expression button, you'll find a data type drop-down list. It's important to specify the correct data type for the field or expression you're using. Now for the operators. Here's a list of the options that you have:
This Operator | Performs This Action
=, <>, >, >=, <, <=, Like | Compares expression to value
Top N, Bottom N | Compares expression to Top (Bottom) set of N values (N = integer)
Top %, Bottom % | Compares expression to Top (Bottom) N percent of values (N = integer or float)
Between | Determines whether expression is between two values, inclusive
In | Determines whether expression is found in list of values
Last, the Value is what you're comparing to the expression using the operator. The construction of a filter using some operators (=, <>, >, etc.) is fairly simple. If my dataset (for AdventureWorks data) has a Category field, and I have a parameter that prompts the user for a single category, I can set up a filter like this:
Expression | Data Type | Operator | Value
[Category] | Text | = | [@Category]
But if I set the parameter to accept multiple values, I need to change the operator from = to In, just as I would have to do if I were using a query parameter. The parameter expression, [@Category], which translates to =Parameters!Category.Value, doesn't need to change because it represents an array as soon as I change the parameter to allow multiple values. The "In" operator requires an array. With that in mind, let's consider a variation on Value. Let's say that I have a parameter that prompts the user for a particular year – and for simplicity's sake, this parameter only allows a single value – and I have an expression that evaluates the previous year based on the user's selection.
Then I want to use these two values in two separate filters with an OR condition. That is, I want to filter either by the year selected OR by the year that was computed. If I create two filters, one for each year (as shown below), then the report will only display results if BOTH filter conditions are met – which would never be true.
Expression | Data Type | Operator | Value
[CalendarYear] | Integer | = | [@Year]
[CalendarYear] | Integer | = | =Parameters!Year.Value-1
To handle this scenario, we need to create a single filter that uses the "In" operator, and then set up the Value expression as an array. To create an array, we use the Split function after creating a string that concatenates the two values, as shown below.
Expression | Data Type | Operator | Value
=CStr(Fields!CalendarYear.Value) | Text | In | =Split(CStr(Parameters!Year.Value) + "," + CStr(Parameters!Year.Value-1), ",")
Note that in this case, I had to apply a string conversion on the year integer so that I could concatenate the parameter selection with the calculated year. Pay attention to the second argument of the Split function – you must use a comma delimiter for the result to work correctly with the In operator. I also had to change the Expression value from [CalendarYear] (or =Fields!CalendarYear.Value) so that the expression would return a string that I could compare with the values in the string array. More fun with filter expressions in future posts!
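As a side note, the reason the delimiter matters can be illustrated outside the report. The following is a small conceptual sketch in C# (not SSRS expression syntax; the year values are made up): the Split call has to use the same delimiter that built the string, and the In operator then behaves like a simple membership test against the resulting array.

using System;
using System.Linq;

class SplitInSketch
{
    static void Main()
    {
        // Stand-ins for the selected year parameter and the computed previous year.
        int selectedYear = 2006;
        string concatenated = Convert.ToString(selectedYear) + "," + Convert.ToString(selectedYear - 1);

        // Split with the same comma delimiter that built the string; a mismatched
        // delimiter would leave a single long value and nothing would ever match.
        string[] years = concatenated.Split(',');

        // The report's In operator is effectively this membership test.
        string calendarYearAsText = "2005";
        Console.WriteLine(years.Contains(calendarYearAsText));   // prints True
    }
}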

    Read the article

  • How can Windows XP/7 users cleanly connect to Mac OS X Server 10.9.4 Mavericks with Active Directory integration?

    - by JakeGould
I'm a Linux/Unix systems admin who also manages a Macintosh server infrastructure, and there is a lone Mac Mini in the mix running 10.9.4 that I would like Windows XP and Windows 7 users to connect to with little or no hassle. The problem? Windows users can't seem to even get to the point of a password prompt, let alone connect. Mind you, this server replaced a Mac OS X 10.6.8 server that had issues, but never had issues with Windows users connecting. The gist of this post is: the tons of different messages out there about Mac OS X 10.9.4 Samba support are mind-numbingly confusing. Can anyone share some solid specifics here? I've read pieces like this one here that suggest turning off file sharing and then adding a share with AFP/SMB enabled would work. But the suggestion seems to apply to 10.8, and from what I know a lot has changed in Samba support in 10.9, let alone the iterations to 10.9.4. Then I found this great tutorial here that explains things step-by-step, which seems like it should work, but the problem is the example given applies to a local user created on the Mac, when I would like users in an Active Directory group – which the Mac is bound to – to access the Mac Mini shares. There are also tons of great tips here on MacWindows.com, but nothing seems to solidly address the issue I am facing. So from what I am reading, these are my options:
- Local User Versus Active Directory: Set up a common local user on the Mac OS X 10.9.4 server to be used for Samba sharing, since Active Directory won't work. Is this really the case? Because loss of AD integration is a major pain.
- Do Extended File Attributes Get Retained from Windows Users: If this were to work, how do extended attributes come into play? Loss of metadata and related info is not an option.
- How Fragile Is Any of This to Updates: How does any of this shake out with Mac OS X updates as well as Windows updates?
- Installing Official, Open Source Samba: Would upgrading the Samba install on the server to the official open source Samba via a package like SMBUp or via the Homebrew method described here help or make the issue worse?
I fully understand there have historically been issues in mixed environments, but nowadays Windows users connecting to a Mac seem to have a truly hellish road ahead of them. Unless I am missing something?

    Read the article

  • CodePlex Daily Summary for Sunday, March 06, 2011

    CodePlex Daily Summary for Sunday, March 06, 2011Popular ReleasesIIS Tuner: IIS Tuner 1.0: IIS and ASP.NET performance optimization toolMinemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support new biome format.CRM 2011 OData Query Designer: CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. This tool allows you to build OData queries by selecting filter criteria, select attributes and order by attributes. The tool also allows you to Execute the query and view the ATOM and JSON data returned. The look and feel of this component will improve and new functionality will be added in the near future so please provide feedback on your experience. Import this solution int...AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window Fixed a critical bug in the open file dialog Removed the resize button Some UI changes 3D camera movement is now more intuitive (Trackball rotation) When an error occurs on the clicker it will attempt to focus AutoLoLYAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011 Rev: 4742) Official Discussion Thread here: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5 Rev. #4661 - Added "Copy" function to forum administration -- Now instead of having to manually re-enter all the access masks, etc, you can just duplicate an existing forum and modify after the fact. Rev. #4642 - New Setting to Enable/Disable Last Unread posts links Rev. #4641 - Added Arabic Language t...Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010This is a bug fix release. Change logFixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed Fixed bug where "Export as Snippet" was failing in non-english locales Fixed bug where opening a new .snippet file would fail in non-english localesChiave File Encryption: Chiave 1.0: Final Relase for Chave 1.0 Stable: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support to Windows XP! Change Log from 0.9.2 to 1.0: ==================== Added: > Added Icon Overlay for Windows 7 Taskbar Icon. >Added Thumbnail Toolbar buttons to make the navigation easier...DotNetNuke® Community Edition: 05.06.02 Beta: Major HighlightsFixed issue where "My Folder" was not available in the URL control and the Telerik HTML Editor Fixed issue where HTML Editor dialogs were not displaying correctly in alternate languages Fixed issue with Regex for email validation Fixed race condition in the core scheduler Fixed issue where editing Host page settings would result in broken host menu Fixed issue where "Apply to All Modules" setting was not propogating settings correctly. Fixed issue where browser lan...DirectQ: Release 1.8.7 (RC1): Release candidate 1 of 1.8.7GoogleTrail: TrailMap Beta 1: Trailmap beta 1 release Now we have updated custom map builder. Now we have complete gpx file editor. Now we have elevation data update service for any gpx file. 
(currently supports only google only).Chirpy - VS Add In For Handling Js, Css, DotLess, and T4 Files: Margogype Chirpy (ver 2.0): Chirpy loves Americans. Chirpy hates Americanos.ASP.NET: Sprite and Image Optimization Preview 3: The ASP.NET Sprite and Image Optimization framework is designed to decrease the amount of time required to request and display a page from a web server by performing a variety of optimizations on the page’s images. This is the third preview of the feature and works with ASP.NET Web Forms 4, ASP.NET MVC 3, and ASP.NET Web Pages (Razor) projects. The binaries are also available via NuGet: AspNetSprites-Core AspNetSprites-WebFormsControl AspNetSprites-MvcAndRazorHelper It includes the foll...Sandcastle Help File Builder: SHFB v1.9.2.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. NOTE: The included help file and the online help have not been completely updated to reflect all changes in this release. A refresh will be issue...Network Monitor Open Source Parsers: Microsoft Network Monitor Parsers 3.4.2554: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, we have added 4 new protocol parsers and updated 79 existing parsers in the NetworkMonitor_Pa...Image Resizer for Windows: Image Resizer 3 Preview 1: Prepare to have your minds blown. This is the first preview of what will eventually become 39613. There are still a lot of rough edges and plenty of areas still under construction, but for your basic needs, it should be relativly stable. Note: You will need the .NET Framework 4 installed to use this version. Below is a status report of where this release is in terms of the overall goal for version 3. If you're feeling a bit technically ambitious and want to check out some of the features th...JSON Toolkit: JSON Toolkit 1.1: updated GetAllJsonObjects() method and GetAllProperties() methods to JsonObject and Properties propertiesFacebook Graph Toolkit: Facebook Graph Toolkit 1.0: Refer to http://computerbeacon.net for Documentation and Tutorial New features:added FQL support added Expires property to Api object added support for publishing to a user's friend / Facebook Page added support for posting and removing comments on posts added support for adding and removing likes on posts and comments added static methods for Page class added support for Iframe Application Tab of Facebook Page added support for obtaining the user's country, locale and age in If...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. 
These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager small improvements for some helpers and AjaxDropdown has Data like the Lookup except it's value gets reset and list refilled if any element from data gets changedManaged Extensibility Framework: MEF 2 Preview 3: This release aims .net 4.0 and Silverlight 4.0. Accordingly, there are two solutions files. The assemblies are named System.ComponentModel.Composition.Codeplex.dll as a way to avoid clashing with the version shipped with the 4th version of the framework. Introduced CompositionOptions to container instantiation CompositionOptions.DisableSilentRejection makes MEF throw an exception on composition errors. Useful for diagnostics Support for open generics Support for attribute-less registr...PHPExcel: PHPExcel 1.7.6 Production: DonationsDonate via PayPal via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channelWe now also have a full PEAR channel! Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute?Please refer the Contribute page.New Projectsasp.net mvc 3 simple cms: asp.net mvc3 cms for learling purpose.C++ Mini Framework: C++ Mini Framework is a simple and easy to use class library in source format to quickly do things you commonly need to do in native projects with the purpose to get you started specifically targeting new C++ developers hopping you will be productive from the very start.Community Megaphone Helpers: Community Megaphone Helpers is a project intended as a means of sharing and accepting contributions for reusable Razor helper modules for functionality used in the Community Megaphone events web application, including Bing maps and more. Supports Microsoft WebMatrix & MVC 3CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. The tool allows you to build OData REST queries by selecting filter criteria, select attributes and order by attributes. The tool also allows you to Execute the queryFaceted Search: Implementation of faceted (advanced) search with composite client side UI. Abstraction interfaces for intagration with different server side technologies, implementation for ASP.NET MVC. FileRenamePro: FileRenamePro makes it easier for users to rename files using advanced rules and regular expressions. It's developed in C#.GT5 Mobile: Gran-Turismo Remote Racing mobile site wrapper.KAT: KAT - Knowledge Assessment Tool is a Solution from IndERP in order to automate Performance and Process Management for a Technology/Job Oriented Companies. monopoly game: Monopoly is an open source project for educational purposes. The project will incorporate XNA, Silverlight, WCF technologies. The project will also show good design patterns considerations, and integration into Facebook App. The project will be written in C#.MuDB: MuDB is an embedded object-oriented database for the .NET Micro Framework which provides a simple yet useful interface for managing data.NDataStructure: A library providing a handful of useful data structures omitted from the .NET framework.Set NuGet version number: A simple command line tool that makes it easy to set the version number within a NuGet .nuspec package configuration file. 
This is useful for when you want to automatically update and publish a NuGet package from your build system.Sightreader: Small application to aid in the wrote learning of basic musical notation.SSAS Operations Templates: SSAS Operations Templates includes SSIS packages, scripts and code samples for automating maintenance of SSAS in a production environment. Includes operations such as backup the current state of cube designs in production, scripting paritition creation, etc.Team Run Log: Team Run LogTEDHelper: Download TED movie's subtitle. ?? TED ?????。testprojectit339: project339Ultimate Resume Repository: A class library and application for storing resumes of multiple people with the ability to export a targeted resume in various, configurable formats. Further additions may include cover letters, browser add-ons to populate applications, job search engine integration, etc...Umbraco: Inspired DataTypes: New datatypes that are not in the default install to make Umbraco have some new controls such as Content/Media Treeview, Content/Media Drop Down List with Treeview. Controls have options to restrict DocumentTypes or MediaType and the start location to retreive fromUsing different schemas in the same Orchestration Receive Port: Using different schemas in the same Orchestration Receive PortWF4Host: Examples in re-hosting Workflow 4 designer.WMP Hotkeys: WMP Hotkeys is a windows media player plugin that enable users to use VLC player like keyboard shortcuts(e.g SPACE to play/pause) in Windows Media Player.

    Read the article

  • ASP.NET MVC3 checkbox dropdownlist create [migrated]

    - by user95381
I'm new to ASP.NET MVC and I'm using a view model to populate the dropdown list and a group of checkboxes. I use SQL Server 2012, with many-to-many relationships between Students and Books, and between Students and Cities. I need to collect a StudentName, one city and many books for one student. I have the following questions: 1. How can I get the values from the database into my StudentBookCityViewModel? 2. How can I save the values to my database in the [HttpPost] Create method? Here is the code:
MODEL
public class Student { public int StudentId { get; set; } public string StudentName { get; set; } public ICollection<Book> Books { get; set; } public ICollection<City> Cities { get; set; } } public class Book { public int BookId { get; set; } public string BookName { get; set; } public bool IsSelected { get; set; } public ICollection<Student> Students { get; set; } } public class City { public int CityId { get; set; } public string CityName { get; set; } public bool IsSelected { get; set; } public ICollection<Student> Students { get; set; } }
VIEW MODEL
public class StudentBookCityViewModel { public string StudentName { get; set; } public IList<Book> Books { get; set; } public StudentBookCityViewModel() { Books = new[] { new Book {BookName = "Title1", IsSelected = false}, new Book {BookName = "Title2", IsSelected = false}, new Book {BookName = "Title3", IsSelected = false} }.ToList(); } public string City { get; set; } public IEnumerable<SelectListItem> CityValues { get { return new[] { new SelectListItem {Value = "Value1", Text = "Text1"}, new SelectListItem {Value = "Value2", Text = "Text2"}, new SelectListItem {Value = "Value3", Text = "Text3"} }; } } }
Context
public class EFDbContext : DbContext{ public EFDbContext(string connectionString) { Database.Connection.ConnectionString = connectionString; } public DbSet<Book> Books { get; set; } public DbSet<Student> Students { get; set; } public DbSet<City> Cities { get; set; } protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Book>() .HasMany(x => x.Students).WithMany(x => x.Books) .Map(x => x.MapLeftKey("BookId").MapRightKey("StudentId").ToTable("StudentBooks")); modelBuilder.Entity<City>() .HasMany(x => x.Students).WithMany(x => x.Cities) .Map(x => x.MapLeftKey("CityId").MapRightKey("StudentId").ToTable("StudentCities")); } }
Controller
public ActionResult Create() { return View(); } [HttpPost] public ActionResult Create() { //I don't understand how I can save values to db context.SaveChanges(); return RedirectToAction("Index"); }
View
@model UsingEFNew.ViewModels.StudentBookCityViewModel @using (Html.BeginForm()) { Your Name: @Html.TextBoxFor(model => model.StudentName) <div>Genre:</div> <div> @Html.DropDownListFor(model => model.City, Model.CityValues) </div> <div>Books:</div> <div> @for (int i = 0; i < Model.Books.Count; i++) { <div> @Html.HiddenFor(x => x.Books[i].BookId) @Html.CheckBoxFor(x => x.Books[i].IsSelected) @Html.LabelFor(x => x.Books[i].IsSelected, Model.Books[i].BookName) </div> } </div> <div> <input id="btnSubmit" type="submit" value="Submit" /> </div> </div> }
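For what it's worth, here is one possible shape of the controller – only a hedged sketch built against the classes shown above, not the poster's actual solution. It assumes `context` is an EFDbContext field on the controller, that question 1 is answered by loading the Books list from the database in the GET action (CityValues would need a similar change), and that the drop-down's selected value can be matched against CityName in the database. It also gives the POST overload a different signature, which resolves the duplicate Create() methods shown above.

// Sketch only. Assumes: using System.Linq; using System.Collections.Generic;
// and a field such as: private readonly EFDbContext context = new EFDbContext("...");

// Question 1 (partially): populate the view model from the database instead of
// the hard-coded titles. IsSelected defaults to false for every loaded book.
public ActionResult Create()
{
    var model = new StudentBookCityViewModel();
    model.Books = context.Books.ToList();
    return View(model);
}

// Question 2: bind the posted view model, look up the chosen entities and save.
[HttpPost]
public ActionResult Create(StudentBookCityViewModel model)
{
    if (!ModelState.IsValid)
    {
        return View(model);
    }

    var student = new Student
    {
        StudentName = model.StudentName,
        Books = new List<Book>(),
        Cities = new List<City>()
    };

    // Attach the single city chosen in the drop-down list (assumes the
    // SelectListItem values correspond to CityName values in the database).
    var city = context.Cities.FirstOrDefault(c => c.CityName == model.City);
    if (city != null)
    {
        student.Cities.Add(city);
    }

    // Attach every book the user ticked; BookId round-trips via the hidden field,
    // so it must have been filled from the database when the form was rendered.
    foreach (var selectedBook in model.Books.Where(b => b.IsSelected))
    {
        var book = context.Books.Find(selectedBook.BookId);
        if (book != null)
        {
            student.Books.Add(book);
        }
    }

    context.Students.Add(student);
    context.SaveChanges();
    return RedirectToAction("Index");
}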

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
In yesterday's blog post we learned what NoSQL is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – Hadoop.
What is Hadoop?
Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under an Apache V2 license. It runs applications on large clusters of commodity hardware and it processes thousands of terabytes of data on thousands of nodes. Hadoop is inspired by Google's MapReduce and Google File System (GFS) papers. The major advantage of the Hadoop framework is that it provides reliability and high availability.
What are the core components of Hadoop?
There are two major components of the Hadoop framework, and both of them perform important tasks for it.
Hadoop MapReduce is the method to split a larger data problem into smaller chunks and distribute them to many different commodity servers. Each server has its own set of resources and processes its chunk locally. Once a commodity server has processed the data, it sends it back collectively to the main server. This is effectively a process where we process large data effectively and efficiently. (We will understand this in tomorrow's blog post.)
Hadoop Distributed File System (HDFS) is a virtual file system. There is a big difference between HDFS and any other file system: when we move a file onto HDFS, it is automatically split into many small pieces. These small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance or high availability. (We will understand this in the day after tomorrow's blog post.)
Besides the above two core components, the Hadoop project also contains the following modules:
- Hadoop Common: Common utilities for the other Hadoop modules
- Hadoop YARN: A framework for job scheduling and cluster resource management
There are a few other projects (like Pig and Hive) related to Hadoop as well, which we will gradually explore in later blog posts.
A Multi-node Hadoop Cluster Architecture
Now let us quickly see the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker or slave nodes. As discussed earlier, the entire cluster contains two layers: one is the MapReduce layer and the other is the HDFS layer. Each of these layers has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A slave or worker node consists of a DataNode and TaskTracker. It is also possible that a slave or worker node is only a data node or only a compute node; as a matter of fact, that flexibility is a key feature of Hadoop. In this introductory blog post we will stop here while describing the architecture of Hadoop. In a future blog post of this series we will explore the various components of the Hadoop architecture in detail.
Why Use Hadoop?
There are many advantages of using Hadoop. Let me quickly list them over here:
- Robust and Scalable – We can add new nodes as needed, as well as modify them.
- Affordable and Cost Effective – We do not need any special hardware for running Hadoop. We can just use commodity servers.
- Adaptive and Flexible – Hadoop is built keeping in mind that it will handle structured and unstructured data.
- Highly Available and Fault Tolerant – When a node fails, the Hadoop framework automatically fails over to another node.
Why is Hadoop named Hadoop?
In 2005 Hadoop was created by Doug Cutting and Mike Cafarella while they were working at Yahoo. Doug Cutting named Hadoop after his son's toy elephant.
Tomorrow
In tomorrow's blog post we will discuss the buzz word MapReduce.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
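Going back to the MapReduce description for a moment, here is a tiny word-count example written in C# with LINQ. It is only a conceptual sketch of the split, process-locally, collect idea described above – it is not Hadoop's actual (Java) API, and the hard-coded strings merely stand in for the chunks that HDFS would hand to different worker nodes.

using System;
using System.Collections.Generic;
using System.Linq;

class MapReduceConceptDemo
{
    static void Main()
    {
        // Each string plays the role of one chunk of a large file – the kind of
        // split that HDFS would distribute to a different worker node.
        var chunks = new List<string>
        {
            "big data big cluster",
            "data node data block"
        };

        // "Map" step: every chunk is turned into (word, 1) pairs independently,
        // which is what each commodity server would do with its local chunk.
        var mapped = chunks.SelectMany(chunk =>
            chunk.Split(' ').Select(word => new { Word = word, Count = 1 }));

        // "Reduce" step: pairs with the same word are grouped and summed,
        // mirroring the collective result sent back to the main server.
        var reduced = mapped
            .GroupBy(pair => pair.Word)
            .Select(group => new { Word = group.Key, Total = group.Sum(p => p.Count) });

        foreach (var entry in reduced)
        {
            Console.WriteLine(entry.Word + ": " + entry.Total);
        }
    }
}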

    Read the article

  • Registration is open - JD Edwards Summit in Dubai

    - by Hartmut Wiese
Dear all, the registration is now open for the 2nd ECEMEA JD Edwards Summit in Dubai. The event is taking place from NOV 17-21. Please see agenda details and registration links on these two pages: Partner and Employee Registration Page: eventreg.oracle.com/profile/web/index.cfm?PKWebId=0x285012625 Customer Registration Page: http://eventreg.oracle.com/profile/web/index.cfm?PKWebId=0x285012625 Partners have to pay a fee, and with the registration each partner confirms that he/she will make the payment. The only accepted method of payment is PayPal. You will receive a separate email after registration with additional details. Prices are the following:
- NOV 17-18: USD 100 per Partner registration
- NOV 20-21: USD 100 per Partner and Customer registration
(NOV 19 is free of charge; Partner Sponsors can register up to 4 people free of charge for the whole event.)
After the registration you receive an automatic workflow message, which is not the registration confirmation. We first have to check the capacity, and once you are approved you will receive a separate email with your registration confirmation. Speciality this year: an invitation letter can be created for Employees only. The fastest way for Customers/Partners to get a visa is talking to your hotel or airline; this is an established process within this region. One workshop is particularly interesting for "JDE in a box" Partners. One standard training (Introduction to Oracle Solaris V.11) was used, and we have added some specific content about how to create a "JDE in a box" solution for the X3-2 / T4-1 combination. A "JDE in a box" solution is a Partner Go-to-Market solution where Oracle helps each partner identify the components to use, and where we also want to leverage our experiences and help our Partners successfully combine this into a Partner offering. The target audience is Partners with no or limited Solaris/SPARC knowledge. This is the first version of this training and we will all learn from these experiences.
I hope to see many JD Edwards-interested people in Dubai during the five days of the event.

    Read the article

  • Ubuntu 12.04 + Raid0 + Windows 7 not loading

    - by Douglas
    please someone help me.... (Sorry for my english) Hi, I have a Pc with 2 Hd (1Tb each) on Raid0. I had a Windows 7 64bits working for several months. When I installed the Windows I let a 100Gb partition empty to install Ubuntu someday. I was using Linux on a Virtualbox, but this week I tried to install Ubuntu 12.04 in this 100Gb partition. I used the Ubuntu alternate cd, because the 'normal' cd was giving me trouble with the Raid0. The grub installation always reported a error. After a lot of work I found that I nedded to install grub on partition /dev/mapper/isw_chjbfeec_DougRaid1 (see Bootinfo below). The Windows installation created a 100Mb boot partition, so I needed to install grub in this partition. Now I have the Ubuntu working 100% ok. The problem is, the Windows is not booting! The windows option is present on the grub menu, but when I choose the windows option there is a black screen and after that the grub is reloaded. My Bootinfo is: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for /boot/grub. => Grub2 (v1.99) is installed in the MBR of /dev/mapper/isw_chjbfeec_DougRaid and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for /boot/grub. sda1: __________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unknown filesystem type '' sda2: __________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unknown filesystem type '' mount: unknown filesystem type '' sda3: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: isw_chjbfeec_DougRaid1: ________________________________________________________ File system: ntfs Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of isw_chjbfeec_DougRaid1 and looks at sector 3841862992 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive. No errors found in the Boot Parameter Block. Operating System: Boot files: /grldr /bootmgr /Boot/BCD /grldr isw_chjbfeec_DougRaid2: ________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. 
Operating System: Windows 7 Boot files: /Windows/System32/winload.exe isw_chjbfeec_DougRaid3: ________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: isw_chjbfeec_DougRaid5: ________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img isw_chjbfeec_DougRaid6: ________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/sda2 206,848 3,686,402,047 3,686,195,200 7 NTFS / exFAT / HPFS /dev/sda3 3,686,402,558 3,907,039,743 220,637,186 5 Extended Invalid MBR Signature found. EBR refers to a location outside the hard drive. /dev/sda2 ends after the last sector of /dev/sda /dev/sda3 ends after the last sector of /dev/sda Drive: isw_chjbfeec_DougRaid _____________________________________________________________________ Disk /dev/mapper/isw_chjbfeec_DougRaid: 2000.4 GB, 2000404348928 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907039744 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/mapper/isw_chjbfeec_DougRaid1 * 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/mapper/isw_chjbfeec_DougRaid2 206,848 3,686,402,047 3,686,195,200 7 NTFS / exFAT / HPFS /dev/mapper/isw_chjbfeec_DougRaid3 3,686,402,558 3,907,039,743 220,637,186 5 Extended /dev/mapper/isw_chjbfeec_DougRaid5 3,686,402,560 3,881,876,479 195,473,920 83 Linux /dev/mapper/isw_chjbfeec_DougRaid6 3,881,876,992 3,907,039,743 25,162,752 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/mapper/isw_chjbfeec_DougRaid1 C89C73D19C73B910 ntfs Reservado pelo Sistema /dev/mapper/isw_chjbfeec_DougRaid2 6830883A3088116C ntfs /dev/mapper/isw_chjbfeec_DougRaid5 bbab868a-ea53-4be3-ba7d-2737fe6cb24c ext4 /dev/mapper/isw_chjbfeec_DougRaid6 7a830a3c-88fb-4cba-80dc-f32e08abfd5b swap /dev/sda isw_raid_member /dev/sdb isw_raid_member /dev/sr0 iso9660 Windows7x86x64SK ========================= "ls -R /dev/mapper/" output: ========================= /dev/mapper: control isw_chjbfeec_DougRaid isw_chjbfeec_DougRaid1 isw_chjbfeec_DougRaid2 isw_chjbfeec_DougRaid3 isw_chjbfeec_DougRaid5 isw_chjbfeec_DougRaid6 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/mapper/isw_chjbfeec_DougRaid5 / ext4 (rw,errors=remount-ro) /dev/sr0 /media/Windows7x86x64SK iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks) ================= isw_chjbfeec_DougRaid1/grldr embedded menu: ================== -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ================== isw_chjbfeec_DougRaid5/boot/grub/grub.cfg: ================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from 
/etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-24-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux /boot/vmlinuz-3.2.0-24-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic-pae } menuentry 'Ubuntu, with Linux 3.2.0-24-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c echo 'Loading Linux 3.2.0-24-generic-pae ...' linux /boot/vmlinuz-3.2.0-24-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-24-generic-pae } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 3.2.0-23-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-23-generic-pae } menuentry 'Ubuntu, with Linux 3.2.0-23-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c echo 'Loading Linux 3.2.0-23-generic-pae ...' linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-23-generic-pae } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober_proxy ### menuentry "Windows 7 (loader) (on /dev/mapper/isw_chjbfeec_DougRaid1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(sda,msdos1)' search --no-floppy --fs-uuid --set=root C89C73D19C73B910 chainloader +1 } ### END /etc/grub.d/30_os-prober_proxy ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- ====================== isw_chjbfeec_DougRaid5/etc/fstab: ======================= -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/mapper/isw_chjbfeec_DougRaid5 / ext4 errors=remount-ro 0 1 /dev/mapper/isw_chjbfeec_DougRaid6 none swap sw 0 0 -------------------------------------------------------------------------------- ========== isw_chjbfeec_DougRaid5: Location of files loaded by Grub: =========== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.2.0-23-generic-pae 2 = boot/initrd.img-3.2.0-24-generic-pae 2 = boot/vmlinuz-3.2.0-23-generic-pae 1 = boot/vmlinuz-3.2.0-24-generic-pae 1 = initrd.img 2 = initrd.img.old 2 = vmlinuz 1 = vmlinuz.old 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda1 Unknown BootLoader on sda2 Unknown BootLoader on sda3 =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt hexdump: /dev/sda1: No such file or directory hexdump: /dev/sda1: No such file or directory hexdump: /dev/sda2: No such file or directory hexdump: /dev/sda2: No such file or directory hexdump: /dev/sda3: No such file or directory hexdump: /dev/sda3: No such file or directory xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in How we can see the Windows part at grub is: menuentry "Windows 7 (loader) (on /dev/mapper/isw_chjbfeec_DougRaid1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(sda,msdos1)' search --no-floppy --fs-uuid --set=root C89C73D19C73B910 chainloader +1 } I tried a lot of combinations at the line: set root='(sda,msdos1)' , but no success I tried to change uuid to the /dev/mapper/isw_chjbfeec_DougRaid2 uuid, but the grub reports a error. I dont know what to do now. I really need to boot my windows partition. Someone knows what to do? Thanks........

    Read the article

  • 'Content' is NOT 'Text' in XAML

    - by psheriff
    One of the key concepts in XAML is that the Content property of a XAML control like a Button or ComboBoxItem does not have to contain just textual data. In fact, Content can be almost any other XAML that you want. To illustrate here is a simple example of how to spruce up your Button controls in Silverlight. Here is some very simple XAML that consists of two Button controls within a StackPanel on a Silverlight User Control. <StackPanel>  <Button Name="btnHome"          HorizontalAlignment="Left"          Content="Home" />  <Button Name="btnLog"          HorizontalAlignment="Left"          Content="Logs" /></StackPanel> The XAML listed above will produce a Silverlight control within a Browser that looks like Figure 1.   Figure 1: Normal button controls are quite boring. With just a little bit of refactoring to move the button attributes into Styles, we can make the buttons look a little better. I am a big believer in Styles, so I typically create a Resources section within my user control where I can factor out the common attribute settings for a particular set of controls. Here is a Resources section that I added to my Silverlight user control. <UserControl.Resources>  <Style TargetType="Button"         x:Key="NormalButton">    <Setter Property="HorizontalAlignment"            Value="Left" />    <Setter Property="MinWidth"            Value="50" />    <Setter Property="Margin"            Value="10" />  </Style></UserControl.Resources> Now back in the XAML within the Grid control I update the Button controls to use the Style attribute and have each button use the Static Resource called NormalButton. <StackPanel>  <Button Name="btnHome"          Style="{StaticResource NormalButton}"          Content="Home" />  <Button Name="btnLog"          Style="{StaticResource NormalButton}"          Content="Logs" /></StackPanel> With the additional attributes set in the Resources section on the Button, the above XAML will now display the two buttons as shown in Figure 2. Figure 2: Use Resources to Make Buttons More Consistent Now let’s re-design these buttons even more. Instead of using words for each button, let’s replace the Content property to use a picture. As they say… a picture is worth a thousand words, so let’s take advantage of that. Modify each of the Button controls and eliminate the Content attribute and instead, insert an <Image> control with the <Button> and the </Button> tags. Add a ToolTip to still display the words you had before in the Content and you will now have better looking buttons, as shown in Figure 3.   Figure 3: Using pictures instead of words can be an effective method of communication The XAML to produce Figure 3 is shown in the following listing: <StackPanel>  <Button Name="btnHome"          ToolTipService.ToolTip="Home"          Style="{StaticResource NormalButton}">    <Image Style="{StaticResource NormalImage}"            Source="Images/Home.jpg" />  </Button>  <Button Name="btnLog"          ToolTipService.ToolTip="Logs"          Style="{StaticResource NormalButton}">    <Image Style="{StaticResource NormalImage}"            Source="Images/Log.jpg" />  </Button></StackPanel> You will also need to add the following XAML to the User Control’s Resources section. <Style TargetType="Image"        x:Key="NormalImage">  <Setter Property="Width"          Value="50" /></Style> Add Multiple Controls to Content Now, since the Content can be whatever we want, you could also modify the Content of each button to be a StackPanel control. 
Then you can have an image and text within the button.
<StackPanel>
  <Button Name="btnHome"
          ToolTipService.ToolTip="Home"
          Style="{StaticResource NormalButton}">
    <StackPanel>
      <Image Style="{StaticResource NormalImage}"
             Source="Images/Home.jpg" />
      <TextBlock Text="Home"
                 Style="{StaticResource NormalTextBlock}" />
    </StackPanel>
  </Button>
  <Button Name="btnLog"
          ToolTipService.ToolTip="Logs"
          Style="{StaticResource NormalButton}">
    <StackPanel>
      <Image Style="{StaticResource NormalImage}"
             Source="Images/Log.jpg" />
      <TextBlock Text="Logs"
                 Style="{StaticResource NormalTextBlock}" />
    </StackPanel>
  </Button>
</StackPanel>
You will need to add one more resource for the TextBlock control too.
<Style TargetType="TextBlock"
       x:Key="NormalTextBlock">
  <Setter Property="HorizontalAlignment"
          Value="Center" />
</Style>
All of the above will now produce the following: Figure 4: Add multiple controls to the content to make your buttons even more interesting.
Summary
While this is a simple example, you can see how XAML Content has great flexibility. You could add a MediaElement control as the content of a Button and play a video within the Button. Not that you would necessarily do this, but it does work. What is nice about adding different content within the Button control is that you still get the Click event and other attributes of a button, but it does not necessarily look like a normal button. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled "Silverlight XAML for the Complete Novice - Part 1."
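One more way to look at the same point: because Content is typed as object, the image-plus-text composition can also be built from code-behind. Below is a hedged C# sketch – btnHome and the Images/Home.jpg path come from the article, while the sizes and the assumption that this runs in the user control's code-behind are illustrative.

// Code-behind sketch (assumes the usual Silverlight namespaces:
// System, System.Windows, System.Windows.Controls, System.Windows.Media.Imaging).
var panel = new StackPanel();

panel.Children.Add(new Image
{
    Width = 50,   // mirrors the NormalImage style from the article
    Source = new BitmapImage(new Uri("Images/Home.jpg", UriKind.Relative))
});

panel.Children.Add(new TextBlock
{
    Text = "Home",
    HorizontalAlignment = HorizontalAlignment.Center   // mirrors NormalTextBlock
});

// Content accepts any object, so the panel simply becomes the button's content.
btnHome.Content = panel;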

    Read the article

  • UndoRedo on Nodes

    - by Geertjan
    When a change is made to the property in the Properties Window, the undo/redo functionality becomes enabled. When undo/redo are invoked, e.g., via the buttons in the toolbar, the display name of the node changes accordingly. The only problem I have is that the buttons only become enabled when the Person Window is selected, not when the Properties Window is selected, which would be desirable. Here's the Person object:

    public class Person implements PropertyChangeListener {

        private String name;
        public static final String PROP_NAME = "name";

        public Person(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            String oldName = this.name;
            this.name = name;
            propertyChangeSupport.firePropertyChange(PROP_NAME, oldName, name);
        }

        private transient final PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

        public void addPropertyChangeListener(PropertyChangeListener listener) {
            propertyChangeSupport.addPropertyChangeListener(listener);
        }

        public void removePropertyChangeListener(PropertyChangeListener listener) {
            propertyChangeSupport.removePropertyChangeListener(listener);
        }

        @Override
        public void propertyChange(PropertyChangeEvent evt) {
            propertyChangeSupport.firePropertyChange(evt);
        }

    }

    And here's the Node with UndoRedo enablement:

    public class PersonNode extends AbstractNode implements UndoRedo.Provider, PropertyChangeListener {

        private UndoRedo.Manager manager = new UndoRedo.Manager();
        private boolean undoRedoEvent;

        public PersonNode(Person person) {
            super(Children.LEAF, Lookups.singleton(person));
            person.addPropertyChangeListener(this);
            setDisplayName(person.getName());
        }

        @Override
        protected Sheet createSheet() {
            Sheet sheet = Sheet.createDefault();
            Sheet.Set set = Sheet.createPropertiesSet();
            set.put(new NameProperty(getLookup().lookup(Person.class)));
            sheet.put(set);
            return sheet;
        }

        @Override
        public void propertyChange(PropertyChangeEvent evt) {
            if (evt.getPropertyName().equals(Person.PROP_NAME)) {
                firePropertyChange(evt.getPropertyName(), evt.getOldValue(), evt.getNewValue());
            }
        }

        public void fireUndoableEvent(String property, Person source, Object oldValue, Object newValue) {
            manager.addEdit(new MyAbstractUndoableEdit(source, oldValue, newValue));
        }

        @Override
        public UndoRedo getUndoRedo() {
            return manager;
        }

        @Override
        public String getDisplayName() {
            Person p = getLookup().lookup(Person.class);
            if (p != null) {
                return p.getName();
            }
            return super.getDisplayName();
        }

        private class NameProperty extends PropertySupport.ReadWrite<String> {

            private Person p;

            public NameProperty(Person p) {
                super("name", String.class, "Name", "Name of Person");
                this.p = p;
            }

            @Override
            public String getValue() throws IllegalAccessException, InvocationTargetException {
                return p.getName();
            }

            @Override
            public void setValue(String newValue) throws IllegalAccessException, IllegalArgumentException, InvocationTargetException {
                String oldValue = p.getName();
                p.setName(newValue);
                if (!undoRedoEvent) {
                    fireUndoableEvent("name", p, oldValue, newValue);
                    fireDisplayNameChange(oldValue, newValue);
                }
            }

        }

        class MyAbstractUndoableEdit extends AbstractUndoableEdit {

            private final String oldValue;
            private final String newValue;
            private final Person source;

            private MyAbstractUndoableEdit(Person source, Object oldValue, Object newValue) {
                this.oldValue = oldValue.toString();
                this.newValue = newValue.toString();
                this.source = source;
            }

            @Override
            public boolean canRedo() {
                return true;
            }

            @Override
            public boolean canUndo() {
                return true;
            }

            @Override
            public void undo() throws CannotUndoException {
                undoRedoEvent = true;
                source.setName(oldValue.toString());
                fireDisplayNameChange(oldValue, newValue);
                undoRedoEvent = false;
            }

            @Override
            public void redo() throws CannotUndoException {
                undoRedoEvent = true;
                source.setName(newValue.toString());
                fireDisplayNameChange(oldValue, newValue);
                undoRedoEvent = false;
            }

        }

    }

    Does anyone out there know how to have the Undo/Redo functionality enabled when the Properties Window is selected?
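    For context on why the buttons enable at all when the Person Window has focus: the platform's Undo/Redo toolbar actions ask whichever TopComponent is currently active for its UndoRedo. Below is a minimal sketch of that wiring, assuming a hypothetical PersonTopComponent that hosts the PersonNode; the class name, field name, and the way the node is constructed are illustrative, not code from the post.

    import org.openide.awt.UndoRedo;
    import org.openide.windows.TopComponent;

    // Hypothetical window hosting the PersonNode; names here are illustrative only.
    public class PersonTopComponent extends TopComponent {

        private final PersonNode personNode = new PersonNode(new Person("Anna"));

        @Override
        public UndoRedo getUndoRedo() {
            // The Undo/Redo toolbar actions query the active TopComponent for its
            // UndoRedo; returning the node's manager is what enables the buttons
            // while this window has focus.
            return personNode.getUndoRedo();
        }
    }

    The open question then amounts to getting the Properties window, which is a separate TopComponent owned by the platform, to expose that same UndoRedo.Manager.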


  • CodePlex Daily Summary for Tuesday, June 18, 2013

    CodePlex Daily Summary for Tuesday, June 18, 2013Popular ReleasesCODE Framework: 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.18): XrmToolbox improvement Use new connection controls (use of Microsoft.Xrm.Client.dll) New display capabilities for tools (size, image and colors) Added prerequisites check Added Most Used Tools feature Tools improvementNew toolSolution Transfer Tool (v1.0.0.0) developed by DamSim Updated toolView Layout Replicator (v1.2013.6.17) Double click on source view to display its layoutXml All tools list Access Checker (v1.2013.6.17) Attribute Bulk Updater (v1.2013.6.18) FetchXml Tester (v1.2013.6.1...Media Companion: Media Companion MC3.570b: New* Movie - using XBMC TMDB - now renames movies if option selected. * Movie - using Xbmc Tmdb - Actor images saved from TMDb if option selected. Fixed* Movie - Checks for poster.jpg against missing poster filter * Movie - Fixed continual scraping of vob movie file (not DVD structure) * Both - Correctly display audio channels * Both - Correctly populate audio info in nfo's if multiple audio tracks. * Both - added icons and checked for DTS ES and Dolby TrueHD audio tracks. * Both - Stream d...LINQ Extensions Library: 1.0.4.2: New to release 1.0.4.2 Custom sorting extensions that perform up to 50% better than LINQ OrderBy, ThenBy extensions... Extensions allow for fine tuning of the sort by controlling the algorithm each sort uses.ExtJS based ASP.NET Controls: FineUI v3.3.0: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI???? ExtJS ?????????,???? ExtJS ?。 ????? FineUI ? ExtJS ?:http://fineui.com/bbs/forum.php?mod=viewthrea...BarbaTunnel: BarbaTunnel 8.0: Check Version History for more information about this release.ExpressProfiler: ExpressProfiler v1.5: [+] added Start time, End time event columns [+] added SP:StmtStarting, SP:StmtCompleted events [*] fixed bug with Audit:Logout eventpatterns & practices: Data Access Guidance: Data Access Guidance Drop4 2013.06.17: Drop 4Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.94: add dstLine and dstCol attributes to the -Analyze output in XML mode. un-combine leftover comma-separates expression statements after optimizations are complete so downstream tools don't stack-overflow on really deep comma trees. 
add support for using a single source map generator instance with multiple runs of MinifyJavaScript, assuming that the results are concatenated to the same output file.Kooboo CMS: Kooboo CMS 4.1.1: The stable release of Kooboo CMS 4.1.0 with fixed the following issues: https://github.com/Kooboo/CMS/issues/1 https://github.com/Kooboo/CMS/issues/11 https://github.com/Kooboo/CMS/issues/13 https://github.com/Kooboo/CMS/issues/15 https://github.com/Kooboo/CMS/issues/19 https://github.com/Kooboo/CMS/issues/20 https://github.com/Kooboo/CMS/issues/24 https://github.com/Kooboo/CMS/issues/43 https://github.com/Kooboo/CMS/issues/45 https://github.com/Kooboo/CMS/issues/46 https://github....VidCoder: 1.5.0 Beta: The betas have started up again! If you were previously on the beta track you will need to install this to get back on it. That's because you can now run both the Beta and Stable version of VidCoder side-by-side! Note that the OpenCL and Intel QuickSync changes being tested by HandBrake are not in the betas yet. They will appear when HandBrake integrates them into the main branch. Updated HandBrake core to SVN 5590. This adds a new FDK AAC encoder. The FAAC encoder has been removed and now...Employee Info Starter Kit: v6.0 - ASP.NET MVC Edition: Release Home - Getting Started - Hands on Coding Walkthrough – Technology Stack - Design & Architecture EISK v6.0 – ASP.NET MVC edition bundles most of the greatest and successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable and high performance web applications with rich user experience effectively and quickly. User End SpecificationsCreating a new employee record Read existing employee records Update an existing employee reco...OLAP PivotTable Extensions: Release 0.8.1: Use the 32-bit download for... Excel 2007 Excel 2010 32-bit (even Excel 2010 32-bit on a 64-bit operating system) Excel 2013 32-bit (even Excel 2013 32-bit on a 64-bit operating system) Use the 64-bit download for... Excel 2010 64-bit Excel 2013 64-bit Just download and run the EXE. There is no need to uninstall the previous release. If you have problems getting the add-in to work, see the Troubleshooting Installation wiki page. The new features in this release are: View #VALUE! Err...DirectXTex texture processing library: June 2013: June 15, 2013 Custom filtering implementation for Resize & GenerateMipMaps(3D) - Point, Box, Linear, Cubic, and Triangle TEX_FILTER_TRIANGLE finite low-pass triangle filter TEX_FILTER_WRAP, TEX_FILTER_MIRROR texture semantics for custom filtering TEX_FILTER_BOX alias for TEX_FILTER_FANT WIC Ordered and error diffusion dithering for non-WIC conversion sRGB gamma correct custom filtering and conversion DDS_FLAGS_EXPAND_LUMINANCE - Reader conversion option for L8, L16, and A8L8 legacy ...WPF Application Framework (WAF): WPF Application Framework (WAF) 3.0.0.440: Version: 3.0.0.440 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Please build the whole solution before you start one of the sample applications. Requirements .NET Framework 4.5 (The package contains a solution file for Visual Studio 2012) Changelog Legend: [B] Breaking change; [O] Marked member as obsolete Samples: Use ValueConverters via StaticResource instead of x:Static. Other Downloads Downloads OverviewBlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1 (1) Web??????SSI?#include???、CGI?????????????????????? 
(2) ???????????????????????????Lakana - WPF Framework: Lakana V2.1 RTM: - Dynamic text localization - A new application wide message busFree language translator and file converter: Free Language Translator 3.3: some bug fixes and a new link to video tutorials on Youtube.Pokemon Battle Online: ETV: ETV???2012?12??????,????,???????$/PBO/branches/PrivateBeta??。 ???????bug???????。 ???? Server??????,?????。 ?????????,?????????????,?????????。 ????????,????,?????????,???????????(??)??。 ???? ????????????。 ???????。 ???PP????,????????????????????PP????,??3。 ?????????????,??????????。 ???????? ??? ?? ???? ??? ???? ?? ?????????? ?? ??? ??? ??? ???????? ???? ???? ???????????????、???????????,??“???????”??。 ???bug ???Modern UI for WPF: Modern UI 1.0.4: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. Related downloads NuGet ModernUI for WPF is also available as NuGet package in the NuGet gallery, id: ModernUI.WPF Download Modern UI for WPF Templates A Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF. The extension includes the ModernUI.WPF NuGet package. DownloadNew ProjectsAux Browser: This browser is secured by system level sandbox technology, and it helps you get where you want to go in the shortest possible time. Best solution to convert Outlook OST to PST files: Convert OST to PST without fearing for data loss. An immaculate converter by Recover Data gives user a privilege to perform OST file recovery with no data loss.Bitbucket.NET: A high-performance .NET library for developing applications that use the Bitbucket service.caosu: aaaChartApp: Chart App for SharePoint 2013Custom Membership Provider SQL + LDAP with one login page: Custom membership provider to allow users to login to there portal from one login page whether its custom SQLDB or the current Active Directory.DroidBrowse: Web browser for Android 1.6 or later.haseebtestProject: This project is created to add random files and for testing purposesHiveSense: Hive monitoring system.JQuery File Upload Plugin with Backload server side component (Demo/Examples): Backload is a professional, full featured ASP.NET MVC 4 file upload controller and handler (server side). JSLocator: Locate javascript functions in the sourceMoonCMS: This is a trivial thing. It doesn't make any sense!MyLabs: MyLabs is Private Labpcvvpes: pcvvpesPrism Model Factory Extensions: Micro framework for model cloning, equality check and mergingSharePoint 2007 Solution and Packaging Guidance: WSPSolution is a standard for building SP 2007 solutions in Visual Studio, namespace planning, deployment planning, WSP creation, and build automation.Windows Phone Wi-Fi Launcher: Wi-Fi settings page launcher for Windows Phone 8.0


  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

    New Self-Healing Replication Clusters
    The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

    New Replication Administration and Failover Utilities
    As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up to date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone article noted above.

    Better Query Optimization and Throughput
    The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:
    - Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    - Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    - Optimizations for Range based queries - Optimizer now uses ready statistics vs Index based scans for queries with multiple range values.
    - Optimizations for queries using filesort and ORDER BY. Optimization criteria/decision on execution method is done now at optimization vs parsing stage.
    - Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption.
    You can learn the details about these new features as well as all of the Optimizer based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

    Also New in MySQL Labs
    As has become our tradition when announcing DMRs we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs. Today is no exception as we are also releasing the following to Labs for you to download, try and let us know your thoughts on where we need to improve:

    InnoDB Online Operations
    MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog.

    InnoDB data access via Memcached API ("NotOnlySQL") - Improved refresh of an earlier feature release
    Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

    Improved Transactional Performance, Scale
    The InnoDB Engineering team has once again under promised and over delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing:
    - 2x throughput improvement for read only activity
    - 6x throughput improvement for SELECT range
    - Read/Write benchmarks are in progress
    More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.

    MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!
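    To make two of the optimizer-facing changes above concrete, here is a small JDBC sketch; it is illustrative rather than code from the announcement, and the connection URL, credentials, and table name are placeholders. Against a 5.6.5 server it creates a DATETIME column that defaults to CURRENT_TIMESTAMP and prints an EXPLAIN plan in JSON format, both of which this release introduces.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch against a MySQL 5.6.5 server; URL, credentials and table name are placeholders.
    public class MySql565Sketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
                 Statement st = con.createStatement()) {

                // 5.6.5: DATETIME columns may now default to CURRENT_TIMESTAMP (previously TIMESTAMP only).
                st.execute("CREATE TABLE IF NOT EXISTS demo ("
                         + "  id INT PRIMARY KEY,"
                         + "  created DATETIME DEFAULT CURRENT_TIMESTAMP)");

                // 5.6.5: EXPLAIN output can be produced as JSON for readability and tool consumption.
                try (ResultSet rs = st.executeQuery(
                        "EXPLAIN FORMAT=JSON SELECT * FROM demo WHERE id > 10")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }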


  • CodePlex Daily Summary for Monday, May 31, 2010

    CodePlex Daily Summary for Monday, May 31, 2010New ProjectsAndrew's XNA Helpers: A collection of simple, yet useful methods and ways of accessing crucial variables such as the ContentManager or SpriteBatch from anywhere in your ...BASIC-DOS: BASIC-DOS OS Makes It Easier For People To Use DOS As It Comes With An Graphical User Interface That Loads Up During Boot Up. No Need To Type Any C...Chirpy - Visual Studio Add In For Handling Js, Css, and DotLess Files: Mashes, minifies, and validates your javascript, stylesheet, and dotless files.fprparser: Fortify XML Report parser in the form of an Excel Add-inHL7ToXmlConverter: Class library to transform HL7 Version 2.x to HL7Xml Version 2 depends on the used HL7 grammar.imdb movie downloader: basically download info from imdb and it s realy FAST ! need some development about thread pooling and webclient issues.. just try ;) for it run ...IMIfmoOptimisation: Задание по оптимизацииMarketView: MarketViewMediaStreamSources: 这是 CodePlex 上第一个可以呈现视频的 MediaStreamSource 项目。Migrate User Profile Values: A tool to move values between SharePoint 2007 User Profiles. The console application MigrateUserProfileValues.exe will export User Profile values ...Nexus6Studio Development Space: This is a working repository for development efforst of the Nexus6studio team.Project BlueLabel: BlueLabelSergioTools: SergioTools is a collection of tools and sample codes to help C# developers to improve your productivity and skills.SharePoint Property Bag Settings 2010: The Property Bag Settings can store any metadata as Key-Value pairs such as connection strings, server names, file paths, and other miscellaneous s...Silverlight Isolated Storage Cache: IsoCache is a small framework the make it easy to store dll's and xap files in the isolated storage so they can be use to speed up the startup of t...StackPivot: StackPivot is an app which can generate Microsoft Pivot "Collections" on-the-fly based on the data collected from the Stack Exchange APIs. Its d...Suspension Calculator: The Suspension Calculator aims to help people who are building race cars perform suspension related calculations. The calculations vary from motion...TFS Timesheets: A custom work item control for Team Foundation Server that allows timesheet data to be captured against a work item.Umbraco Membership infrastructure: Improvements to the Umbraco membership API implemented as a package. Includes user properties and improved membership API support.VolgaTransTelecomClient: VolgaTransTelecomClient makes it easier for clients of "Volga TransTelecom" company to get info about account. It's developed in C#.Wouter's SharePoint Demo Land: This site contains many of the SharePoint demos that I create while training or hobbying, both for the 2007 and the 2010 release. Please click the...New ReleasesAgUnit - Silverlight unit testing with ReSharper: AgUnit 0.1: Initial release of AgUnit. Copy the extracted files from AgUnit-0.1-ReSharper5.0.zip into the "Bin\Plugins\" folder of your ReSharper installatio...Andrew's XNA Helpers: Andrew's XNA Helpers v1: My first version of my Library of XNA Helpers. 
Includes: Variables class - A static class that can be accessed anywhere throughout your project -...Clean your Database: Database Cleaner Setup: Database Cleaner SetupClean your Database: Database Cleaner Source: Database Cleaner Source codeCommunity Forums NNTP bridge: Community Forums NNTP Bridge V16: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has add...Community Forums NNTP bridge: Community Forums NNTP Bridge V17: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release is prim...Folder Bookmarks: Folder Bookmarks 1.5.8: The latest version of Folder Bookmarks (1.5.8), with new GUI improvements and 'Help' feature - all the instructions needed to use the software (If ...HL7ToXmlConverter: HL7ToXmlConverter Version 0.0.0.9: First distribution on codepleximdb movie downloader: 0.9 Fist Tryout of MyImdb SourceCode: Fist Tryout of MyImdb SourceCodeLightweight Fluent Workflow: Objectflow 1.0.0.2: The features of this release take advantage of .Net 3.5 featrures; Lamda support is back for Constraints and Functions can now be used in workflows...MDownloader: MDownloader-0.15.16.59384: Fixed password detector in context of .rar files. Fixed FileFactory provider implementation. Fixed detection of internet connection failures.MediaStreamSources: MediaStreamSources 1.0 Beta1: 1.0 Beta1Mongodb Management Studio: Mongodb Management Studio v1.1: MongodbManagementStudio v1.1 1.服务器管理功能 添加服务器,删除服务器 2.服务器,数据库,表,列,索引,树形显示和状态信息查看 3.查询分析器功能. 支持select,insert,Delete,update 支持自定义分页函数 $rowid(1,5) 查询...Multiplayer Quiz: Release 1_7_1_0: Latest Release. .NET 4.0 required to run server, and recommended for client. Download: http://www.microsoft.com/downloads/details.aspx?FamilyID=9cf...SCSM PowerShell Cmdlets: SCSM PowerShell Cmdlets Version 0.1.1: First release! This is a minimal build with limited funcitonallity. Should be handled as a preview of what's to come. Included Cmdlets are: New-...SharePoint Property Bag Settings 2010: PropertyBagSettings2010.wsp: The SharePoint Property Bag Settings is a reusable component that you can include in your own SharePoint applications. It can store simple types, s...Sharp Tests Ex: Sharp Tests Ex 1.0.0RTM: Project Description #TestsEx (Sharp Tests Extensions) is a set of extensible extensions. The main target is write short assertions where the Visual...Silverlight Testing Automation Tool: StatLight V1.1: FeaturesApplied some UnitDriven specific changes from justncase80/StatLight Updated to the 0.0.5 release over on rul:UnitDriven.codeplex.com Upda...SQL Server 2005 and 2008 - Backup, Integrity Check and Index Optimization: 30 May 2010: This is the latest version of my solution for Backup, Integrity Check and Index Optimization in SQL Server 2005, SQL Server 2008 and SQL Server 200...Suspension Calculator: SuspensionCalculator_V1.0.0.29: The Suspension Calculator aims to help people who are building race cars perform suspension related calculations. The calculations vary from motion...Svn2Svn: copy, sync, replay or reflect changes across SVN repositories: 1.2 (Beta): Build 1.2.8932.0. 
Added /incremental (/i) mode, svn2svn detects all previously synced revisions and starts at the latest revision that has not bee...VCC: Latest build, v2.1.30530.0: Automatic drop of latest buildVolgaTransTelecomClient: V.1.0.2.0: v.1.0.2.0 releaseWatchersNET.SkinObjects.ModulActionsMenu: ModulActionsMenu 01.00.01: changes CSS Fixed for the Microsoft Internet ExplorerWouter's SharePoint Demo Land: Navigation Service Basic: This sample shows how to create a simple Service Application for SharePoint 2010. You can read up on the how and why on my blog series about Serv...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active ProjectsCommunity Forums NNTP bridgeAStar.netpatterns & practices – Enterprise LibraryBlogEngine.NETGMap.NET - Great Maps for Windows Forms & PresentationMirror Testing SystemIonics Isapi Rewrite FilterCustomer Portal Accelerator for Microsoft Dynamics CRMN2 CMSpatterns & practices: Windows Azure Security Guidance


  • Android Can't get two virtual joysticks to move independently and at the same time

    - by Cole
    @Override public boolean onTouch(View v, MotionEvent event) { // TODO Auto-generated method stub float r = 70; float centerLx = (float) (screenWidth*.3425); float centerLy = (float) (screenHeight*.4958); float centerRx = (float) (screenWidth*.6538); float centerRy = (float) (screenHeight*.4917); float dx = 0; float dy = 0; float theta; float c; int action = event.getAction(); int actionCode = action & MotionEvent.ACTION_MASK; int pid = (action & MotionEvent.ACTION_POINTER_INDEX_MASK) >> MotionEvent.ACTION_POINTER_INDEX_SHIFT; int fingerid = event.getPointerId(pid); int x = (int) event.getX(pid); int y = (int) event.getY(pid); c = FloatMath.sqrt(dx*dx + dy*dy); theta = (float) Math.atan(Math.abs(dy/dx)); switch (actionCode) { case MotionEvent.ACTION_DOWN: case MotionEvent.ACTION_POINTER_DOWN: //if touching down on left stick, set leftstick ID to this fingerid. if(x < screenWidth/2 && c<r*.8) { lsId = fingerid; dx = x-centerLx; dy = y-centerLy; touchingLs = true; } else if(x > screenWidth/2 && c<r*.8) { rsId = fingerid; dx = x-centerRx; dy = y-centerRy; touchingRs = true; } break; case MotionEvent.ACTION_MOVE: if (touchingLs && fingerid == lsId) { dx = x - centerLx; dy = y - centerLy; }else if (touchingRs && fingerid == rsId) { dx = x - centerRx; dy = y - centerRy; } c = FloatMath.sqrt(dx*dx + dy*dy); theta = (float) Math.atan(Math.abs(dy/dx)); //if touching outside left radius and moving left stick if(c >= r && touchingLs && fingerid == lsId) { if(dx>0 && dy<0) { //top right quadrant lsX = r * FloatMath.cos(theta); lsY = -(r * FloatMath.sin(theta)); Log.i("message", "top right"); } if(dx<0 && dy<0) { //top left quadrant lsX = -(r * FloatMath.cos(theta)); lsY = -(r * FloatMath.sin(theta)); Log.i("message", "top left"); } if(dx<0 && dy>0) { //bottom left quadrant lsX = -(r * FloatMath.cos(theta)); lsY = r * FloatMath.sin(theta); Log.i("message", "bottom left"); } else if(dx > 0 && dy > 0){ //bottom right quadrant lsX = r * FloatMath.cos(theta); lsY = r * FloatMath.sin(theta); Log.i("message", "bottom right"); } } if(c >= r && touchingRs && fingerid == rsId) { if(dx>0 && dy<0) { //top right quadrant rsX = r * FloatMath.cos(theta); rsY = -(r * FloatMath.sin(theta)); Log.i("message", "top right"); } if(dx<0 && dy<0) { //top left quadrant rsX = -(r * FloatMath.cos(theta)); rsY = -(r * FloatMath.sin(theta)); Log.i("message", "top left"); } if(dx<0 && dy>0) { //bottom left quadrant rsX = -(r * FloatMath.cos(theta)); rsY = r * FloatMath.sin(theta); Log.i("message", "bottom left"); } else if(dx > 0 && dy > 0) { rsX = r * FloatMath.cos(theta); rsY = r * FloatMath.sin(theta); Log.i("message", "bottom right"); } } else { if(c < r && touchingLs && fingerid == lsId) { lsX = dx; lsY = dy; } if(c < r && touchingRs && fingerid == rsId){ rsX = dx; rsY = dy; } } break; case MotionEvent.ACTION_UP: case MotionEvent.ACTION_POINTER_UP: if (fingerid == lsId) { lsId = -1; lsX = 0; lsY = 0; touchingLs = false; } else if (fingerid == rsId) { rsId = -1; rsX = 0; rsY = 0; touchingRs = false; } break; } return true; } There's a left joystick and a right joystick. Right now only one will move at a time. If someone could set me on the right track I would be incredibly grateful cause I've been having nightmares about this problem.
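    One plausible reason only one stick moves at a time (this is a guess at the cause, not a confirmed diagnosis) is that the pointer index decoded from ACTION_POINTER_INDEX_MASK is only meaningful for POINTER_DOWN/POINTER_UP events; a single ACTION_MOVE event carries the current positions of all active pointers, so the move branch needs to loop over every pointer rather than read just one. Below is a minimal sketch that reuses the question's fields (lsId, rsId, touchingLs, touchingRs, the center coordinates, and lsX/lsY, rsX/rsY) and introduces a hypothetical helper handleMove:

    // Sketch only: intended to live in the same listener class as onTouch above and
    // to be called from the ACTION_MOVE branch instead of the single-pointer logic.
    private void handleMove(MotionEvent event) {
        // A MOVE event reports every active pointer, so walk them all.
        for (int i = 0; i < event.getPointerCount(); i++) {
            int id = event.getPointerId(i);
            float px = event.getX(i);
            float py = event.getY(i);
            if (touchingLs && id == lsId) {
                // Update the left stick only from the pointer that owns it.
                lsX = px - centerLx;
                lsY = py - centerLy;
            } else if (touchingRs && id == rsId) {
                // Update the right stick only from the pointer that owns it.
                rsX = px - centerRx;
                rsY = py - centerRy;
            }
        }
        // Clamping each stick to its radius (the quadrant math in the question) would follow here.
    }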


  • Game Over function is not working Starling

    - by aNgeLyN omar
    I've been following a tutorial over the web but it somehow did not show something about creating a game over function. I am new to the Starling framework and Actionscript so I'm kind of still trying to find a way to make it work. Here's the complete snippet of the code. package screens { import flash.geom.Rectangle; import flash.utils.getTimer; import events.NavigationEvent; import objects.GameBackground; import objects.Hero; import objects.Item; import objects.Obstacle; import starling.display.Button; import starling.display.Image; import starling.display.Sprite; import starling.events.Event; import starling.events.Touch; import starling.events.TouchEvent; import starling.text.TextField; import starling.utils.deg2rad; public class InGame extends Sprite { private var screenInGame:InGame; private var screenWelcome:Welcome; private var startButton:Button; private var playAgain:Button; private var bg:GameBackground; private var hero:Hero; private var timePrevious:Number; private var timeCurrent:Number; private var elapsed:Number; private var gameState:String; private var playerSpeed:Number = 0; private var hitObstacle:Number = 0; private const MIN_SPEED:Number = 650; private var scoreDistance:int; private var obstacleGapCount:int; private var gameArea:Rectangle; private var touch:Touch; private var touchX:Number; private var touchY:Number; private var obstaclesToAnimate:Vector.<Obstacle>; private var itemsToAnimate:Vector.<Item>; private var scoreText:TextField; private var remainingLives:TextField; private var gameOverText:TextField; private var iconSmall:Image; static private var lives:Number = 2; public function InGame() { super(); this.addEventListener(starling.events.Event.ADDED_TO_STAGE, onAddedToStage); } private function onAddedToStage(event:Event):void { this.removeEventListener(Event.ADDED_TO_STAGE, onAddedToStage); drawGame(); scoreText = new TextField(300, 100, "Score: 0", "MyFontName", 35, 0xD9D919, true); remainingLives = new TextField(600, 100, "Lives: " + lives +" X ", "MyFontName", 35, 0xD9D919, true); iconSmall = new Image(Assets.getAtlas().getTexture("darnahead1")); iconSmall.x = 360; iconSmall.y = 40; this.addChild(iconSmall); this.addChild(scoreText); this.addChild(remainingLives); } private function drawGame():void { bg = new GameBackground(); this.addChild(bg); hero = new Hero(); hero.x = stage.stageHeight / 2; hero.y = stage.stageWidth / 2; this.addChild(hero); startButton = new Button(Assets.getAtlas().getTexture("startButton")); startButton.x = stage.stageWidth * 0.5 - startButton.width * 0.5; startButton.y = stage.stageHeight * 0.5 - startButton.height * 0.5; this.addChild(startButton); gameArea = new Rectangle(0, 100, stage.stageWidth, stage.stageHeight - 250); } public function disposeTemporarily():void { this.visible = false; } public function initialize():void { this.visible = true; this.addEventListener(Event.ENTER_FRAME, checkElapsed); hero.x = -stage.stageWidth; hero.y = stage.stageHeight * 0.5; gameState ="idle"; playerSpeed = 0; hitObstacle = 0; bg.speed = 0; scoreDistance = 0; obstacleGapCount = 0; obstaclesToAnimate = new Vector.<Obstacle>(); itemsToAnimate = new Vector.<Item>(); startButton.addEventListener(Event.TRIGGERED, onStartButtonClick); //var mainStage:InGame =InGame.current.nativeStage; //mainStage.dispatchEvent(new Event(Event.COMPLETE)); //playAgain.addEventListener(Event.TRIGGERED, onRetry); } private function onStartButtonClick(event:Event):void { startButton.visible = false; startButton.removeEventListener(Event.TRIGGERED, 
onStartButtonClick); launchHero(); } private function launchHero():void { this.addEventListener(TouchEvent.TOUCH, onTouch); this.addEventListener(Event.ENTER_FRAME, onGameTick); } private function onTouch(event:TouchEvent):void { touch = event.getTouch(stage); touchX = touch.globalX; touchY = touch.globalY; } private function onGameTick(event:Event):void { switch(gameState) { case "idle": if(hero.x < stage.stageWidth * 0.5 * 0.5) { hero.x += ((stage.stageWidth * 0.5 * 0.5 + 10) - hero.x) * 0.05; hero.y = stage.stageHeight * 0.5; playerSpeed += (MIN_SPEED - playerSpeed) * 0.05; bg.speed = playerSpeed * elapsed; } else { gameState = "flying"; } break; case "flying": if(hitObstacle <= 0) { hero.y -= (hero.y - touchY) * 0.1; if(-(hero.y - touchY) < 150 && -(hero.y - touchY) > -150) { hero.rotation = deg2rad(-(hero.y - touchY) * 0.2); } if(hero.y > gameArea.bottom - hero.height * 0.5) { hero.y = gameArea.bottom - hero.height * 0.5; hero.rotation = deg2rad(0); } if(hero.y < gameArea.top + hero.height * 0.5) { hero.y = gameArea.top + hero.height * 0.5; hero.rotation = deg2rad(0); } } else { hitObstacle-- cameraShake(); } playerSpeed -= (playerSpeed - MIN_SPEED) * 0.01; bg.speed = playerSpeed * elapsed; scoreDistance += (playerSpeed * elapsed) * 0.1; scoreText.text = "Score: " + scoreDistance; initObstacle(); animateObstacles(); createEggItems(); animateItems(); remainingLives.text = "Lives: "+lives + " X "; if(lives == 0) { gameState = "over"; } break; case "over": gameOver(); break; } } private function gameOver():void { gameOverText = new TextField(800, 400, "Hero WAS KILLED!!!", "MyFontName", 50, 0xD9D919, true); scoreText = new TextField(800, 600, "Score: "+scoreDistance, "MyFontName", 30, 0xFFFFFF, true); this.addChild(scoreText); this.addChild(gameOverText); playAgain = new Button(Assets.getAtlas().getTexture("button_tryAgain")); playAgain.x = stage.stageWidth * 0.5 - startButton.width * 0.5; playAgain.y = stage.stageHeight * 0.75 - startButton.height * 0.75; this.addChild(playAgain); playAgain.addEventListener(Event.TRIGGERED, onRetry); } private function onRetry(event:Event):void { playAgain.visible = false; gameOverText.visible = false; scoreText.visible = false; var btnClicked:Button = event.target as Button; if((btnClicked as Button) == playAgain) { this.dispatchEvent(new NavigationEvent(NavigationEvent.CHANGE_SCREEN, {id: "playnow"}, true)); } disposeTemporarily(); } private function animateItems():void { var itemToTrack:Item; for(var i:uint = 0; i < itemsToAnimate.length; i++) { itemToTrack = itemsToAnimate[i]; itemToTrack.x -= playerSpeed * elapsed; if(itemToTrack.bounds.intersects(hero.bounds)) { itemsToAnimate.splice(i, 1); this.removeChild(itemToTrack); } if(itemToTrack.x < -50) { itemsToAnimate.splice(i, 1); this.removeChild(itemToTrack); } } } private function createEggItems():void { if(Math.random() > 0.95){ var itemToTrack:Item = new Item(Math.ceil(Math.random() * 10)); itemToTrack.x = stage.stageWidth + 50; itemToTrack.y = int(Math.random() * (gameArea.bottom - gameArea.top)) + gameArea.top; this.addChild(itemToTrack); itemsToAnimate.push(itemToTrack); } } private function cameraShake():void { if(hitObstacle > 0) { this.x = Math.random() * hitObstacle; this.y = Math.random() * hitObstacle; } else if(x != 0) { this.x = 0; this.y = 0; lives--; } } private function initObstacle():void { if(obstacleGapCount < 1200) { obstacleGapCount += playerSpeed * elapsed; } else if(obstacleGapCount !=0) { obstacleGapCount = 0; createObstacle(Math.ceil(Math.random() * 5), Math.random() * 1000 
+ 1000); } } private function animateObstacles():void { var obstacleToTrack:Obstacle; for(var i:uint = 0; i<obstaclesToAnimate.length; i++) { obstacleToTrack = obstaclesToAnimate[i]; if(obstacleToTrack.alreadyHit == false && obstacleToTrack.bounds.intersects(hero.bounds)) { obstacleToTrack.alreadyHit = true; obstacleToTrack.rotation = deg2rad(70); hitObstacle = 30; playerSpeed *= 0.5; } if(obstacleToTrack.distance > 0) { obstacleToTrack.distance -= playerSpeed * elapsed; } else { if(obstacleToTrack.watchOut) { obstacleToTrack.watchOut = false; } obstacleToTrack.x -= (playerSpeed + obstacleToTrack.speed) * elapsed; } if(obstacleToTrack.x < -obstacleToTrack.width || gameState == "over") { obstaclesToAnimate.splice(i, 1); this.removeChild(obstacleToTrack); } } } private function checkElapsed(event:Event):void { timePrevious = timeCurrent; timeCurrent = getTimer(); elapsed = (timeCurrent - timePrevious) * 0.001; } private function createObstacle(type:Number, distance:Number):void{ var obstacle:Obstacle = new Obstacle(type, distance, true, 300); obstacle.x = stage.stageWidth; this.addChild(obstacle); if(type >= 4) { if(Math.random() > 0.5) { obstacle.y = gameArea.top; obstacle.position = "top" } else { obstacle.y = gameArea.bottom - obstacle.height; obstacle.position = "bottom"; } } else { obstacle.y = int(Math.random() * (gameArea.bottom - obstacle.height - gameArea.top)) + gameArea.top; obstacle.position = "middle"; } obstaclesToAnimate.push(obstacle); } } }

