Search Results

Search found 83849 results on 3354 pages for 'net framework client pro'.


  • Excel - referenced values via OleDB from .Net client

    - by ho
    I'm trying to read an Excel file (.xls, I think Excel 2003 compatible) via OleDB, but it fails to get the values for referenced fields. This is my current test code (please note, this is just part of the class):

        Private m_conn As OleDbConnection

        Public Sub New(ByVal fileName As String)
            Dim connString As String = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties=Excel 8.0;", fileName)
            m_conn = New OleDbConnection(connString)
            m_conn.Open()
        End Sub

        Public Sub GetSheet(ByVal sheet As String)
            Dim query As String = String.Format("SELECT * FROM [{0}]", sheet)
            Using cmd As OleDbCommand = m_conn.CreateCommand()
                cmd.CommandText = query
                Dim ds As New DataSet()
                Dim a As New OleDbDataAdapter(cmd)
                Using rdr As OleDbDataReader = cmd.ExecuteReader()
                    While rdr.Read()
                        Debug.WriteLine(rdr.Item(0).ToString())
                    End While
                End Using
            End Using
        End Sub

    But if the value is a reference (something like =+'MySheetName'!K37), I just get a DBNull from the call to rdr.Item(0). I can get around this by automating Excel instead, but would prefer not to have to use Excel automation, so I am wondering if anyone knows how to do it.

    Read the article

  • Create a real-time web application using the .NET Framework and Azure - very confused

    - by test
    Let us say I would like a simple (yet complex) web application where there is continuous READING AND WRITING to the SQL Azure database. Let us say I am tracking a location, and I would like it to be updated very frequently (let's take the worst case: 1 second). From the little knowledge I have, I think this involves continuously writing the location to the database, and continuously reading from the database to update another person through a website. Do you have any suggestions on which technologies I can use? Is there a simple way? I have heard about node.js and SignalR, but I have no idea how to use them, or whether they are really what I need. The last tutorial I checked out simply uses a while(true) loop, but I don't think keeping a thread continuously busy is a good idea. Do I have to create some background task? Do I have to create some web service? This is a school project and I would prefer not to go for the most difficult option, but if there is some sort of solution, challenge accepted :) Can you please help me? I have asked many questions here and yet I have no solution in mind.
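
    For illustration, a minimal sketch of the push model this question is circling around, assuming SignalR 2.x on .NET (the hub, method and argument names here are hypothetical, not taken from the question):

        using Microsoft.AspNet.SignalR;

        // Hypothetical hub: a client reports its position, the server pushes it to everyone else
        // immediately, so no browser has to poll the database every second.
        public class LocationHub : Hub
        {
            public void UpdateLocation(string userId, double latitude, double longitude)
            {
                // Persist to SQL Azure here if the history matters, then broadcast.
                Clients.Others.locationUpdated(userId, latitude, longitude);
            }
        }

    A JavaScript client would subscribe to locationUpdated through the generated proxy; the point of the sketch is that the server pushes changes as they happen instead of each page running a while(true) polling loop.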

    Read the article

  • Validation controls are not validating when enabled on the client side using JavaScript, please guide

    - by haansi
    Hi, as per requirement I disabled all validation controls on the page in the PageLoad event on the server side. On clicking the submit button I want to enable them, validate the page, and submit only if the page is valid. I am able to enable all validators, but the one thing I am unable to understand is that they do not validate the page. I set alerts and checked that they are being enabled, but they do not validate the page and they let the page submit. I am sorry, I cannot see where I am wrong; maybe I need to call some validation method as well, or prevent the default behavior of the button. Please guide me. Below is my script:

        <script type="text/javascript">
            function NextClicked() {
                var _ddlStatus = document.getElementById("<%=ddlEmpStatus.ClientID%>");
                var _selectedIndex = _ddlStatus.selectedIndex;
                if (_selectedIndex == 0) {
                    alert("Nothing selected");
                }
                else if (_selectedIndex == 1) {
                    for (i = 0; i < Page_Validators.length; i++) {
                        Page_Validators[i].Enabled = true;
                    }
                }
            }
        </script>

    Thanks in anticipation.

    Read the article

  • sql server - framework 4 - IIS 7 weird sort from db to page

    - by ila
    I am experiencing strange behavior when reading a resultset from the database in a calling method: the sort order of the rows is different from what the database should return.
    My farm:
    - database server: SQL Server 2008 on Windows Server 2008 64 bit
    - web server: a couple of load-balanced Windows Server 2008 64 bit machines running IIS 7
    The application runs in a v4.0 app pool, set to enable 32-bit applications.
    Here's a description of the problem:
    - a stored procedure is called that returns a resultset sorted on a particular column
    - I can see the call to the SP through Profiler; if I run the statement myself I see correct sorting
    - the calling page gets the results and, before any further elaboration, logs the rows immediately after the SP execution
    - the results are in a completely different order (I cannot even tell whether they are sorted in any way)
    Some details on the stored procedure:
    - it is called by code using a SqlDataAdapter
    - it also has an output value (a count of the rows) that is read correctly
    - the sort field to use is passed as a parameter
    - it makes use of temp tables to collect data and perform the desired sort
    Any idea on what I could check? The same code and same database work correctly in a test environment, 32 bit and not load balanced.
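
    Rows are only guaranteed to come back in a given order when the outermost SELECT has an ORDER BY, so one way to pin down where the ordering is lost is to sort explicitly on the client after the fill. A hedged sketch (the stored procedure, parameter and column names are placeholders, not the poster's code):

        using System.Data;
        using System.Data.SqlClient;

        static class SortCheck
        {
            // Fill the stored procedure results and re-sort them client-side, to confirm whether
            // the rows really arrive unordered or are merely logged/rendered in a different order.
            public static DataView Load(string connectionString)
            {
                DataTable table = new DataTable();
                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand("dbo.MyStoredProc", conn)) // placeholder SP name
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@SortField", "MyColumn"); // placeholder parameter
                    new SqlDataAdapter(cmd).Fill(table);
                }

                // Sorting here makes the expected order independent of how SQL Server happened to
                // stream the rows back (inserting into a temp table and selecting from it only
                // guarantees order if the final SELECT carries the ORDER BY).
                return new DataView(table) { Sort = "MyColumn ASC" };
            }
        }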

    Read the article

  • Creating Html Helper Method - MVC Framework

    - by nettguy
    I am learning MVC from Stephen Walther's tutorials on the MSDN website. He suggests that we can create an Html Helper method, for example:

        using System;

        namespace MvcApplication1.Helpers
        {
            public class LabelHelper
            {
                public static string Label(string target, string text)
                {
                    return String.Format("<label for='{0}'>{1}</label>", target, text);
                }
            }
        }

    My question: under which folder do I need to create these classes? The Views folder or the Controllers folder? Or can I place them in the App_Code folder?
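
    The folder itself does not matter to MVC as long as the class compiles with the project; a top-level Helpers folder (matching the MvcApplication1.Helpers namespace above) is the usual convention in those tutorials. A sketch of the extension-method form, which is what lets a view call it as Html.Label(...) once the namespace is imported:

        using System;
        using System.Web.Mvc;

        namespace MvcApplication1.Helpers
        {
            // Extension-method variant of the same helper. In a view it reads as
            // Html.Label("firstName", "First Name:") once MvcApplication1.Helpers is imported
            // (via the <namespaces> section of web.config or an Import directive in the page).
            public static class LabelExtensions
            {
                public static string Label(this HtmlHelper helper, string target, string text)
                {
                    return String.Format("<label for='{0}'>{1}</label>", target, text);
                }
            }
        }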

    Read the article

  • UpdatePanel refresh from client

    - by Voice
    Hi, I'm trying to refresh an UpdatePanel via JavaScript: __doPostBack("<%=upMyPanel.ClientID %>", ""); But somehow its controls are all empty afterwards. On the other hand, they are all filled when I click any trigger control. How can I fix this? Thanks.

    Read the article

  • Impossible to install .NET Framework 3.5

    - by yae
    Hi, a few days ago I formatted one of my computers (XP Home SP2 with prior updates installed). Now I am trying to install .NET Framework 3.5, but it displays this error:

        [03/21/10,17:19:36] Microsoft .NET Framework 2.0a: [2] Error: Installation failed for component Microsoft .NET Framework 2.0a. MSI returned error code 1603
        [03/21/10,17:20:17] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a is not installed.

    I have already tried the following to solve this issue, but the problem persists:
    1. uninstall all frameworks and reinstall from scratch
    2. clean all temp files and reinstall from scratch
    3. uninstall and clean all frameworks with the dotnetfx cleanup tool and reinstall from scratch
    4. install the .NET Framework 3.5 full package
    But all these possible solutions have failed. Any idea how to solve this? Thanks in advance

    Read the article

  • Custom provider Password Reset client

    - by ProfK
    I'm looking for guidance on writing a custom password reset UI that fits the Provider 'Pattern', or degrades silently to the built-in defaults. E.g. my reset control must collect extra information and behave differently from the standard Password Recovery control. It must, as closely as possible, use the standard MembershipProvider interface for standard functions, and only use an extended interface for the non-standard stuff. I'd like some reading on issues such as: what must I ask the Membership Provider for, and what must I do myself? What must I tell the provider (service?) about what I do? Etc.
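
    For illustration, one hedged shape this can take (class and method names are hypothetical): keep the standard MembershipProvider surface so built-in controls and defaults keep working, and put the non-standard behaviour on an extended type that the custom UI uses only when it is present.

        using System.Web.Security;

        // Hypothetical extended provider: everything standard is inherited unchanged, so built-in
        // controls degrade gracefully; only the custom reset UI knows about the extra method.
        public class ExtendedMembershipProvider : SqlMembershipProvider
        {
            public virtual string ResetPasswordWithAudit(string username, string answer, string auditReason)
            {
                // Delegate the actual reset to the standard provider implementation...
                string newPassword = base.ResetPassword(username, answer);

                // ...and do the non-standard work here (extra data collected by the UI,
                // logging, notification, etc.).
                return newPassword;
            }
        }

    The custom UI can then test "Membership.Provider as ExtendedMembershipProvider" and fall back to the standard reset flow when the cast returns null, which is one way to get the "degrade silently to built-in defaults" behaviour described above.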

    Read the article

  • Python client / server question

    - by AustinM
    I'm working on a bit of a project in python. I have a client and a server. The server listens for connections and once a connection is received it waits for input from the client. The idea is that the client can connect to the server and execute system commands such as ls and cat. This is my server code:

        import sys, os, socket

        host = ''
        port = 50105

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((host, port))
        print("Server started on port: ", port)
        s.listen(5)
        print("Server listening\n")
        conn, addr = s.accept()
        print 'New connection from ', addr
        while (1):
            rc = conn.recv(5)
            pipe = os.popen(rc)
            rl = pipe.readlines()
            file = conn.makefile('w', 0)
            file.writelines(rl[:-1])
            file.close()
            conn.close()

    And this is my client code:

        import sys, socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        host = 'localhost'
        port = input('Port: ')
        s.connect((host, port))
        cmd = raw_input('$ ')
        s.send(cmd)
        file = s.makefile('r', 0)
        sys.stdout.writelines(file.readlines())

    When I start the server I get the right output, saying the server is listening. But when I connect with my client and type a command the server exits with this error:

        Traceback (most recent call last):
          File "server.py", line 21, in <module>
            rc = conn.recv(2)
          File "/usr/lib/python2.6/socket.py", line 165, in _dummy
            raise error(EBADF, 'Bad file descriptor')
        socket.error: [Errno 9] Bad file descriptor

    On the client side, I get the output of ls but the server gets screwed up.

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publically facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also works with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publically exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they support a web-site with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
Customizing your URL Rewriting rules further is easy to-do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well.  I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • Crazy idea: Connect .NET and SAP with SAP JCo using IKVM.NET

    - by Kottan
    Because the SAP Connector for .NET is no longer maintained by SAP, I am now looking for an alternative to connect the Microsoft world with the SAP world. I know there are third-party products like ERPConnect, but I want to do this with tools from SAP. Therefore the crazy idea arose to use the SAP Java Connector in combination with the tool IKVM.NET (www.ikvm.net/devguide/net2java.html). IKVM.NET provides the IKVMC tool, which converts Java bytecode to .NET dlls and exes. "No sooner said than done!" I converted the SAP JCo to .NET dlls and created a new Visual Studio solution. I put all the JCO files into a subdirectory of my solution. I set 2 references to the generated IKVM.OpenJDK.Core.dll and sapjco.dll. Great, all JCO classes were now available as .NET classes. Full of optimism I wrote some little code to connect to a SAP system:

        JCO.Client client = null;
        client = JCO.createClient(...)

    The compilation of my test code had no errors. "Wonderful!" I thought. Then I started my test application. Unfortunately I got an exception calling JCO.createClient:

        Could not load middleware layer 'com.sap.mw.jco.rfc.MiddlewareRFC'
        no sapjcorfc in java.library.path

    I have 2 questions on this topic. 1) Do you think my idea of using the SAP Java Connector to connect .NET with SAP is a good idea, or is it nonsense? Perhaps someone has already had the same idea ;-) 2) How can the above exception be solved?

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL is stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format. PE files .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable, had to execute native code to start the CLR to read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It still is part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other datastructures in the file, which I will generally be glossing over in this initial post. Structure of a PE file Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset. PE headers As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly. I've highlighted the relevant bytes that I will refer to in this post: Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram. The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80). 
Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on. The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable, followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll. Next up is the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields comes the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is is the highlighted Subsystem field, and specifies if this is a GUI or console app - 0x20 for a GUI app, 0x30 for a console app. Data directories & section headers After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address table are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories comes the section headers; one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections: .text: base RVA 0x2000, file offset 0x200 .rsrc: base RVA 0x4000, file offset 0xa00 .reloc: base RVA 0x6000, file offset 0x1000 The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page in the right-click file properties page, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub. What about the CLR? As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (eg. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly. Conclusion To summarize, the important data in the first 512 bytes of a file is: DOS header. This contains a pointer to the PE signature. DOS stub, which we'll be looking at in a later post. PE signature PE file header (aka COFF header). This specifies whether the file is an exe or a dll. PE standard fields. This specifies whether the file is AnyCPU/32bit or 64bit. PE NT-specific fields. This specifies what type of app this is, if it is an app. Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section. Section headers. These are used to map between RVA and file offset. 
The important one is .text, which is where all the CLR data is stored. In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
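
    As a quick, hedged illustration of the byte layout described above (not code from the article itself - just a minimal reader that assumes a well-formed image), the interesting fields in the first few hundred bytes can be pulled out like this:

        using System;
        using System.IO;

        static class PeSniffer
        {
            public static void Describe(string path)
            {
                using (var reader = new BinaryReader(File.OpenRead(path)))
                {
                    // The DOS header starts with the 'MZ' magic number (0x4D, 0x5A).
                    if (reader.ReadUInt16() != 0x5A4D)
                        throw new InvalidDataException("Not an MZ executable");

                    // Offset 0x3C of the DOS header holds the file offset of the PE signature.
                    reader.BaseStream.Seek(0x3C, SeekOrigin.Begin);
                    uint peOffset = reader.ReadUInt32();

                    // PE signature: 'PE\0\0'.
                    reader.BaseStream.Seek(peOffset, SeekOrigin.Begin);
                    if (reader.ReadUInt32() != 0x00004550)
                        throw new InvalidDataException("PE signature not found");

                    // The COFF (PE file) header is 20 bytes; its last two bytes are the
                    // Characteristics field, where bit 0x2000 marks a dll.
                    reader.BaseStream.Seek(peOffset + 4 + 18, SeekOrigin.Begin);
                    ushort characteristics = reader.ReadUInt16();
                    Console.WriteLine((characteristics & 0x2000) != 0 ? "dll" : "exe");

                    // The PE standard fields follow immediately; the magic number is
                    // 0x010B for x86/AnyCPU images and 0x020B for x64 images.
                    ushort magic = reader.ReadUInt16();
                    Console.WriteLine(magic == 0x020B ? "x64 image" : "x86/AnyCPU image");
                }
            }
        }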

    Read the article

  • Browser Specific Extensions of HttpClient

    - by imran_ku07
            Introduction:                     REpresentational State Transfer (REST) causing/leaving a great impact on service/API development because it offers a way to access a service without requiring any specific library by embracing HTTP and its features. ASP.NET Web API makes it very easy to quickly build RESTful HTTP services. These HTTP services can be consumed by a variety of clients including browsers, devices, machines, etc. With .NET Framework 4.5, we can use HttpClient class to consume/send/receive RESTful HTTP services(for .NET Framework 4.0, HttpClient class is shipped as part of ASP.NET Web API). The HttpClient class provides a bunch of helper methods(for example, DeleteAsync, PostAsync, GetStringAsync, etc.) to consume a HTTP service very easily. ASP.NET Web API added some more extension methods(for example, PutAsJsonAsync, PutAsXmlAsync, etc) into HttpClient class to further simplify the usage. In addition, HttpClient is also an ideal choice for writing integration test for a RESTful HTTP service. Since a browser is a main client of any RESTful API, it is also important to test the HTTP service on a variety of browsers. RESTful service embraces HTTP headers and different browsers send different HTTP headers. So, I have created a package that will add overloads(with an additional Browser parameter) for almost all the helper methods of HttpClient class. In this article, I will show you how to use this package.           Description:                     Create/open your test project and install ImranB.SystemNetHttp.HttpClientExtensions NuGet package. Then, add this using statement on your class, using ImranB.SystemNetHttp;                     Then, you can start using any HttpClient helper method which include the additional Browser parameter. For example,  var client = new HttpClient(myserver); var task = client.GetAsync("http://domain/myapi", Browser.Chrome); task.Wait(); var response = task.Result; .                     Here is the definition  of Browser, public enum Browser { Firefox = 0, Chrome = 1, IE10 = 2, IE9 = 3, IE8 = 4, IE7 = 5, IE6 = 6, Safari = 7, Opera = 8, Maxthon = 9, }                     These extension methods will make it very easy to write browser specific integration test. It will also help HTTP service consumer to mimic the request sending behavior of a browser. This package source is available on github. So, you can grab the source and add some additional behavior on the top of these extensions.         Summary:                     Testing a REST API is an important aspect of service development and today, testing with a browser is crucial. In this article, I showed how to write integration test that will mimic the browser request sending behavior. I also showed an example. Hopefully you will enjoy this article too.

    Read the article

  • I need help solving a rather weird error in a WCF service.

    - by Moulde
    Hi. I have a solution that contains three projects: a main project with my MVC app, a Silverlight application, and a (Silverlight-enabled) WCF service project. In my Silverlight project I have made a Service Reference to my WCF service, and I pretty much got that working. In my WCF service I have a method that returns a Book object, which has some random fields like title, date etc. In the Book class I have an ICollection field that contains a list of events. The Book class is generated using Entity Framework 4.0, and lazy loading is enabled. If in my GetBook(int id) method I return a book with the events field not initialized, it works like a charm. But if I initialize the field, I get this error: The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error. I have a few ideas why that is happening, and while writing this I just got another one: the WCF service somehow threw away the reference to the event class. That would be very weird, since I have a reference between my main MVC app (with the models) and my WCF service. Since I have enabled lazy loading in EF 4.0, I suspect that it may be what is generating the error, but I'm not sure why that would be, because I'm not in any way accessing that field. I could understand that I may not be able to access the events field after I receive the object in my Silverlight application, since the connection between the Book object and Entity Framework is broken by then. Did I mention that lazy loading is enabled on my EF instance? And there is no inner exception in the thrown exception. Thanks in advance. Malte Baden Hansen
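
    Lazy loading is a plausible suspect here: with it enabled, EF 4 can hand back lazily-populated collections that the serializer only touches while WCF is writing the response, after the context work has finished, and a serialization failure surfaces on the client as exactly this kind of unhelpful "no meaningful reply" error. A hedged sketch of the usual counter-measure (BooksEntities, Books, Events and Id are assumed names, not taken from the project):

        using System.Linq;

        public class BookService
        {
            // Hypothetical service method, assuming an EF4 ObjectContext called BooksEntities
            // and a Book entity with an Events navigation collection.
            public Book GetBook(int id)
            {
                using (var context = new BooksEntities())
                {
                    // With lazy loading off, WCF serializes already-materialized data instead of
                    // trying to walk lazy properties after this method has returned.
                    context.ContextOptions.LazyLoadingEnabled = false;

                    // Load the related events eagerly rather than relying on lazy loading.
                    return context.Books.Include("Events").Single(b => b.Id == id);
                }
            }
        }

    Temporarily enabling includeExceptionDetailInFaults on the service is also worth doing here, since it replaces the generic client-side message with the actual server-side exception.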

    Read the article

  • Is there a stable email client or email client plugin that will support GMail-like conversations?

    - by Christopher Done
    Conversations in GMail are such an incredibly simple yet powerful way to look at mail. I get about twenty emails a day from various clients and I hate looking at a flat file of all emails, and I hate looking at "per client" email folders. With GMail I can just look at a specific conversation. I can see a few lines from each in the collapsed titles so that I can quickly remember what was said previously, and I can expand them if I need details or need to quote something. But I can't justify using GMail to handle sensitive work emails, and we need to be able to access all my emails offline. So what do I use?

    Read the article

  • How to improve the HTML formatting in Evolution mail client

    - by Tom
    I have a question about viewing HTML emails in the Evolution mail client. Basically, I am receiving some emails that look lovely in Thunderbird but not in Evolution, because Evolution's HTML rendering isn't as advanced. Does anyone know how to improve the HTML rendering of Evolution? E.g. a plugin, tip, code patch, etc. The closest I've got is to right-click the email, "Save As...", save it as an html file, then open it in Firefox. Not exactly streamlined! What emails can't it display well? We use the Subversion revision control system, which is set up to send an email whenever someone commits, all nicely coloured via svnnotify with the --handler HTML::ColorDiff -d parameter. When Evolution fails to use the colours, I find it very hard to read the raw diff.

    Read the article

  • Connecting to a secure, graphical, x session using a stateless thin client

    - by npeterson
    I'm looking for an open source, non-proprietary solution to this problem; any help would be appreciated. I'd like to set up a server running Ubuntu Server. I'd then like to connect to this server from a stateless thin client and use an X session. This would occur mostly over local area networks, but also possibly over the internet. What would be the ideal set of software to accomplish this from a security and usability standpoint? Are there any ready-made stateless thin clients that don't require proprietary software?

    Read the article

  • Email client supporting multiple accounts

    - by TGP1994
    I've been using Microsoft Outlook for a very long time, although one thing that has bugged me is how multiple email accounts are handled. As far as I can tell, there isn't a set and straightforward way of managing multiple accounts in one instance of outlook. For example, when I create an email, saving it as a draft will by default dump it into the first personal folder that I have open, which in my current case, is not where I want it. I would like all trash, spam, drafts, contacts, etc. etc. to be handled on a PF by PF basis. Now to my question: Is there a way to accomplish the task of email account "segregation" in Outlook (2007 is my current version), or is there another client that handles this in a more organized fashion? Note: I don't use most of the features in outlook (I hardly even need special formatting for my messages), I generally just send and read mail, and get a few attachments, so leaving Outlook wouldn't be too much of a stretch for me.

    Read the article

  • Creating Client Certificate - Windows

    - by Aur
    I am trying to create client certificates against a Microsoft CA using the built-in website (Microsoft Active Directory Certificate Services). From what I can tell, you have to log in as the user to create the corresponding certificate. Is there any way to get around that? I tried to create my own template by duplicating the user template, but it doesn't match and gets rejected when trying to authenticate. Is this something I'd have to look at building? Any help is appreciated, thank you for your time.

    Read the article

  • Remote Software Solution that Acts as a Client

    - by Richard
    I am looking for something that I am not sure exists. I have a remote computer that will not allow incoming traffic due to ISP blocking of ports(basically double NAT situation that I am unable to get around). I am wondering if I have a computer acting as a client, is there any solution out there that will allow remote access to the computer. I do have other servers on the net that have static IP's that the computer could initiate a connection with. I am thinking of using Debian Linux, However computer is not built yet so OS is not overly important at this point.

    Read the article

  • Auto login CISCO VPN client on linux [closed]

    - by user70704
    Hi, I have installed the Cisco VPN client on my Linux system (Fedora Core 8). After login, every time, I need to run the VPNC command to connect to the VPN server. The VPNC command requests the following input from the user:

        IPSec gateway :
        IPSec ID:
        IPSec Secret:
        Username:
        Password:

    So, my requirement is: can I connect to the VPN server through a single command? I feel too lazy to enter the above details every time. I want to connect to the VPN server on boot/startup. I tried using an expect script, but I couldn't get it to work. Thanks in advance.

    Read the article

  • .NET Web Service (asmx) Timeout Problem

    - by Barry Fandango
    I'm connecting to a vendor-supplied ASMX web service and sending a set of data over the wire. My first attempt hit the 1 minute timeout that Visual Studio puts in the app.config file by default when you add a service reference to a project. I increased it to 10 minutes, another timeout. 1 hour, another timeout:

        Error: System.TimeoutException: The request channel timed out while waiting for a reply after 00:59:59.6874880. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The HTTP request to 'http://servername/servicename.asmx' has exceeded the allotted timeout of 01:00:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.WebException: The operation has timed out
        at System.Net.HttpWebRequest.GetResponse()
        [... lengthy stacktrace follows]

    I contacted the vendor. They confirmed the call may take over an hour (don't ask, they are the bane of my existence). I increased the timeout to 10 hours to be on the safe side. However, the web service call continues to time out at 1 hour. The relevant app.config section now looks like this:

        <basicHttpBinding>
            <binding name="BindingName" closeTimeout="10:00:00" openTimeout="10:00:00"
                     receiveTimeout="10:00:00" sendTimeout="10:00:00" allowCookies="false"
                     bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                     maxBufferSize="2147483647" maxBufferPoolSize="524288"
                     maxReceivedMessageSize="2147483647" messageEncoding="Text"
                     textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2147483647"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <security mode="None">
                    <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                    <message clientCredentialType="UserName" algorithmSuite="Default" />
                </security>
            </binding>
        </basicHttpBinding>

    Pretty absurd, but regardless the timeout is still kicking in at 1 hour. Unfortunately every change takes at least an additional hour to test. Is there some internal limit that I'm bumping into - another timeout setting to be changed somewhere? All changes to these settings up to one hour had the expected effect. Thanks for any help you can provide!
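
    One thing worth trying when chasing which of the several timeouts fires at exactly one hour is to set them on the proxy in code, which rules out the app.config binding section not being the one actually in effect. A hedged sketch - VendorServiceSoapClient stands in for whatever proxy type the service reference generated, and the specific values are illustrative:

        using System;
        using System.ServiceModel;

        static class ProxyTimeouts
        {
            // Apply the timeouts directly to the generated proxy instead of relying on app.config.
            public static void Apply(VendorServiceSoapClient client)
            {
                var binding = client.Endpoint.Binding;
                binding.SendTimeout    = TimeSpan.FromHours(10);
                binding.ReceiveTimeout = TimeSpan.FromHours(10);
                binding.OpenTimeout    = TimeSpan.FromMinutes(5);
                binding.CloseTimeout   = TimeSpan.FromMinutes(5);

                // OperationTimeout on the channel is a separate per-call limit and is worth raising too.
                client.InnerChannel.OperationTimeout = TimeSpan.FromHours(10);
            }
        }

    If the call still dies at one hour after that, the limit is likely on the server side (for an ASMX host, the httpRuntime executionTimeout and any intermediate proxy or load-balancer idle timeouts are the usual suspects), which only the vendor can change.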

    Read the article

  • SQL Server connection string Asynchronous Processing=true

    - by George2
    Hello everyone, I am using .NET 2.0 + SQL Server 2005 Enterprise + VSTS 2008 + C# + ADO.NET to develop an ASP.NET web application. My question is: if I am using Asynchronous Processing=true with SQL Server authentication mode (not Windows authentication mode, i.e. using the sa account and password in the connection string in web.config), will Asynchronous Processing=true impact the performance of my web application (or does that depend on my ADO.NET code implementation pattern/scenario)? And why? Thanks in advance, George
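
    The flag itself is independent of whether SQL or Windows authentication is used; what it enables is the Begin/End style of command execution, and the usual guidance for .NET 2.0 was to set it only when the code actually makes such calls, since it adds a small amount of per-connection overhead otherwise. A hedged sketch of the kind of call that needs it (the table name and query are placeholders):

        using System;
        using System.Data.SqlClient;

        static class AsyncQuery
        {
            // BeginExecuteReader/EndExecuteReader require Asynchronous Processing=true in the
            // connection string on .NET 2.0; purely synchronous ExecuteReader calls do not.
            public static void Run(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn)) // placeholder
                {
                    conn.Open();
                    IAsyncResult result = cmd.BeginExecuteReader();

                    // Other work can happen here while the query runs on the server.

                    using (SqlDataReader reader = cmd.EndExecuteReader(result))
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetInt32(0));
                    }
                }
            }
        }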

    Read the article

  • User created Validator won't call client-side validation JavaScript on 'complex' user control.

    Hi All, I have created a user control (from System.Web.UI.UserControl), and created my own validator for the user control (from System.Web.UI.WebControls.BaseValidator). Everything works ok until I try to get the user control to do client side validation. While trying to debug this issue I have set 'Control to Validate' to a text box instead of the custom user control, and the client side script works fine! It appears to me that it has an a issue with my composite user control I have created. Has anyone encountered this issue before? Has anyone else seen client side validation fail on custom user controls? Some extra info : The composite control is a drop down list and 'loader image', as it is a ajax enabled drop down list (using ICallbackEventHandler). I know that the client side javascript is being written to the page, and have placed an alert('random message') as the first line in the validator function that only appears if it is validating a text box (i.e. not when it is validating my custom control) Language : C# (ASP.NET 2.0) and jQuery 1.2.6 in aspx file : <rms:UserDDL ID="ddlUserTypes" runat="server" PreLoad="true" /> <rms:DDLValidator ID="userTypesVal" ControlToValidate="ddlUserTypes" ErrorMessage="You have not selected a UserType" runat="server" Text="You have not selected a UserType" Display="Dynamic" EnableClientScript="true" /> in validator code behind protected string ScriptBlock { get { string nl = System.Environment.NewLine; return "<script type=\"text/javascript\">" + nl + " function " + ScriptBlockFunctionName + "(ctrl)" + nl + " {" + nl + " alert('Random message'); " + nl + " var selVal = $('#' + ctrl.controltovalidate).val(); " + nl + " alert(selVal);" + nl + " if (selVal === '-1') return false; " + nl + " return false; " + nl + " }" + nl + "</script>"; } } protected override void OnPreRender(EventArgs e) { if (this.DetermineRenderUplevel() && this.EnableClientScript) { Page.ClientScript.RegisterExpandoAttribute(this.ClientID, "evaluationfunction", this.ScriptBlockFunctionName); Page.ClientScript.RegisterClientScriptBlock(GetType(), this.ScriptBlockKey, this.ScriptBlock); } base.OnPreRender(e); } I know my ControlPropertiesValid() and EvaluateIsValid() work ok. I appreciate any help on this issue. Noel.

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security-related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements we have added in this release:
    Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected more quickly than before.
    Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security and allows the same application service to be exposed as different Business Services with their own security. This is particularly useful when you base a Business Service on a query zone.
    User Propagation - As with other client-server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this means that traceability at the database level is that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This means that the common database userid is still used, but the end user is now identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.
    Enhanced Security Definitions - Security administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.
    Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.
    Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework the Business Services and Service Scripts could already be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones on the product (including base package ones).
    Time Zone Support - In some of the Oracle Utilities Application Framework based products, the time zone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.
    JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct impact on how security operates externally.
    JMX Based Cache Management - In an earlier bullet point, I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR120 compliant JMX console or JMX browser. I will be writing another more detailed blog entry on the JMX enhancements as it is quite a change and an exciting direction for the product line.
    Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. At some sites they wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to allow delegation for patch execution.
    User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arises when the contractor leaves the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model to prevent any use of the userid whilst retaining audit information.
    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
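    As an aside, the CLIENT_IDENTIFIER idea is easy to demonstrate outside the framework as well. The framework itself sets it through the Oracle JDBC connection API, but conceptually it is the same as tagging the pooled session yourself. A rough C# illustration only - it assumes ODP.NET is available and uses invented names; it is not how the product does it internally:

        using Oracle.DataAccess.Client; // ODP.NET - assumed to be available

        class ClientIdentifierDemo
        {
            // Tag the pooled session with the end user's id so database-level auditing
            // and monitoring can see who is behind the shared connection-pool account.
            public static void RunAsUser(string endUserId, string connStr)
            {
                using (OracleConnection conn = new OracleConnection(connStr))
                {
                    conn.Open();

                    using (OracleCommand setId = conn.CreateCommand())
                    {
                        setId.CommandText = "BEGIN DBMS_SESSION.SET_IDENTIFIER(:id); END;";
                        setId.Parameters.Add("id", OracleDbType.Varchar2).Value = endUserId;
                        setId.ExecuteNonQuery();
                    }

                    // ... application work on conn; V$SESSION.CLIENT_IDENTIFIER now shows
                    // endUserId for this session until it is cleared or overwritten ...

                    using (OracleCommand clearId = conn.CreateCommand())
                    {
                        clearId.CommandText = "BEGIN DBMS_SESSION.CLEAR_IDENTIFIER; END;";
                        clearId.ExecuteNonQuery();
                    }
                }
            }
        }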

    Read the article

< Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >