Search Results


  • How can I sync Access databases and keep them up-to-date?

    - by user327472
    I have an Access database on my server. We split it into a front end and a back end: the front end runs on local computers for searching data, adding new records, and running reports, and any update or new record is written to the back-end database. I want to use this database in another building, on other servers, and those servers have no direct connection to ours. How can I sync the two back-end databases so the data stays up to date? These details may be useful: it's a large amount of data - about 25,750 client records, and I guess more than 25 tables, at about 80 MB.

    Read the article

  • How to configure permissions for a win2008 task running as Network Service to stop/start a service on a different system?

    - by weiji
    Well... the title says it all, actually. We've got a scheduled task set up on a Windows Server 2003 box, running as Network Service, and the batch file it runs invokes "sc" to stop and then start a service on another Windows box. However, sc reports: [SC] OpenService FAILED 5: Access is denied. Running the same batch file via Windows Explorer works fine, and my user account is part of the Administrators group, which I believe is why there are no issues when I run it manually. Is this a permission I enable for Network Service on the first server? Or do I somehow grant permissions to Network Service on the target server? This question (http://serverfault.com/questions/19382/why-sc-query-fails-from-one-machine-but-works-from-another) touches on something similar, but I'm looking to enable Network Service to access the service via the scheduled task.
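
    A hedged sketch of one possible fix, assuming the problem is the target service's security descriptor (the service name MyService and the server name targetserver are hypothetical). The sc tool can show and set the descriptor in SDDL form; granting the NETWORK SERVICE SID (NS) start, stop, and query rights might look like this:

        rem Dump the current SDDL so the existing ACEs can be preserved
        sc \\targetserver sdshow MyService

        rem Re-apply the descriptor with one extra ACE appended for NETWORK SERVICE
        rem (RP = start, WP = stop, DT = pause/continue, LC = query status)
        sc \\targetserver sdset MyService "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;RPWPDTLC;;;NS)"

    The existing ACEs shown by sdshow should be copied verbatim; the descriptor above is only illustrative. Also note that when the task runs as Network Service, the remote server sees the connection as the first machine's computer account, so the new ACE may need to name that computer account's SID rather than NS.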

    Read the article

  • T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers

    - by joycsharp
    I just released an open source project at CodePlex, which includes a set of T-4 templates that let you build logical layers (i.e. DAL/BLL) with just a few clicks! The logical layers implemented here are based on Entity Framework 4.0, are ASP.NET Web Form data-bound control friendly, and are fully unit testable. In this open source project you will get Entity Framework 4.0-based T-4 templates for the following types of logical layers: Data Access Layer: Entity Framework 4.0 provides an excellent ORM data access layer. It also includes support for T-4 templates as a built-in code generation strategy in Visual Studio 2010, where we can customize the default structure of the Entity Framework-based data access layer. Here, the default structure of the data access layer has been enhanced to support mock testing of the Entity Framework 4.0 object model. Business Logic Layer: an ASP.NET Web Form data-bound control friendly business logic layer, which will enable you to build data-bound web applications on top of ASP.NET Web Forms and Entity Framework 4.0 in a few clicks, with great support for mock testing. Download it to make your web development productive. Enjoy!
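
    For readers unfamiliar with the pattern, here is a rough illustration (not the project's actual generated code - all type names are hypothetical) of what a data-bound-control-friendly, mock-testable layer tends to look like: the BLL exposes simple Select-style methods that an ObjectDataSource can bind to declaratively, and depends on an interface over the EF object context so tests can substitute a fake.

        // Hypothetical interface over the EF 4.0 object context, for mocking
        public interface ICustomerContext
        {
            IQueryable<Customer> Customers { get; }
            void SaveChanges();
        }

        // BLL class shaped so ASP.NET data-bound controls can call it
        public class CustomerLogic
        {
            private readonly ICustomerContext _context;
            public CustomerLogic(ICustomerContext context) { _context = context; }

            public List<Customer> GetCustomers()   // usable as an ObjectDataSource SelectMethod
            {
                return _context.Customers.OrderBy(c => c.Name).ToList();
            }
        }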

    Read the article

  • OCS 2007 Access Edge Server Certificate issue

    - by BWCA
    We are currently building additional OCS 2007 R2 Access Edge Servers to handle additional capacity, and we ran into an SSL certificate issue when we were setting up the servers. Before running the steps to Deploy an Edge Server, we successfully imported the SSL certificate that we use for external access on all of the new servers. After successfully completing the first three Deploy Edge Server steps on one of the new servers, we started working on Step 4: Configure Certificates for the Edge Server. After selecting Assign an existing certificate from the common tasks list and clicking Next to select a certificate, no certificates were listed. The first thing we did was use the Certificates MMC snap-in to review the SSL certificate information. We noticed in the General tab that "Windows does not have enough information to verify this certificate," and in the Certification Path tab that the issuer of this certificate could not be found - for the very SSL certificate we had imported successfully earlier. While troubleshooting, we learned that we could not reach the URL for the certificate's CRL to download the CRL file, due to restrictive firewall rules between the new OCS 2007 R2 Access Edge Servers and the Internet. After modifying the firewall rules, we were able to download the CRL file, and when we reran Step 4 to assign an existing certificate, the certificate was listed.
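
    A quick way to check for exactly this kind of CRL-retrieval failure (a sketch; the exported file name is hypothetical) is certutil, which fetches and validates every URL in the certificate's AIA and CRL distribution points:

        certutil -verify -urlfetch exported-cert.cer

    If the CRL URLs time out or are blocked, the output flags them, which points directly at the firewall rule that needs changing.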

    Read the article

  • Access Officejet Pro L7590 memory card reader

    - by luri
    I can't manage to access my printer's memory card reader in Nautilus; I can only access it with hp-unload. Here's a sample session:

        lubuntu@L-X6:~$ hp-unload hp:/net/Officejet_Pro_L7500?zc=HP065193
        HP Linux Imaging and Printing System (ver. 3.10.6)
        Photo Card Access Utility ver. 3.3
        Copyright (c) 2001-9 Hewlett-Packard Development Company, LP
        This software comes with ABSOLUTELY NO WARRANTY.
        This is free software, and you are welcome to distribute it
        under certain conditions. See COPYING file for more details.
        Using device: hp:/net/Officejet_Pro_L7500?zc=HP065193
        error: Photo card write failed (Card may be write protected)
        Photocard on device hp:/net/Officejet_Pro_L7500?zc=HP065193 mounted
        DO NOT REMOVE PHOTO CARD UNTIL YOU EXIT THIS PROGRAM
        warning: Photo card is write protected.
        Type 'help' for a list of commands. Type 'exit' to quit.
        pcard: / > ls
         Name          Size    Type
         dcim/                 directory
         eos_digi.tal  0 B     unknown/unknown
        1 files, 0 B
        pcard: / > cd dcim
        pcard: /dcim > ls
         Name          Size    Type
         .                     directory
         ..                    directory
         100eos5d/             directory
         267canon/             directory
         270canon/             directory
         271canon/             directory
         272canon/             directory
        0 files, 0 B
        pcard: /dcim > cd 272canon
        pcard: /dcim/272canon > ls
         Name          Size    Type
         .                     directory
         ..                    directory
         _mg_7201.jpg  3.1 MB  image/jpeg
         ...........(some more files).................
         _mg_7281.jpg  2.5 MB  image/jpeg
         _mg_7282.jpg  2.5 MB  image/jpeg
        82 files, 241.6 MB (253377883)

    How can I access it from Nautilus or mount it as a filesystem? Note that this is similar to this other question: Can't get HP Officejet 6500 card reader to work - but there, no supported device seemed to be present at all, while in my case I can access the memory card through hp-unload.

    Read the article

  • Use Network-Manager to Connect to a wifi Access Point on the command-line

    - by Stefano Palazzo
    I'd like to connect to a wireless access point from the command line. Ideally, I'd only need the name of the AP, but the hardware address would work as well. I know I can use nmcli to connect to a managed network connection, but in my case the access point may not be configured for Network-Manager yet (see the difference between the output of nm-tool and nmcli con). Example output of nmcli:

        Auto pwln       3a3d62b1-bbdf-4f76-b4d2-c211fd5cfb03   802-11-wireless   [...]
        Wired Network   aa586921-accf-4932-98c4-c873c310f08e   802-3-ethernet    [...]
        Cisco-UDP Uni   7f94847b-04dc-40b7-9955-5246fb77cc65   vpn               [...]
        T-mobile (D1)   867f345a-cbbf-4bd4-b883-a5e5ae0932f0   gsm               [...]

    Example output of nm-tool:

        State: connected
        - Device: eth1 [Auto pwln] -----------------------------------------------
          [...]
          Wireless Access Points (* = current AP)
            *pwln:  Infra, [...], Freq 2472 MHz, Rate 54 Mb/s, Strength 80  WPA WPA2
             WLAN:  Infra, [...], Freq 2422 MHz, Rate 54 Mb/s, Strength 20  WPA WPA2
          [...]

    How do I connect to an access point that may or may not be known to NM? Extra: finding out whether the connection needs a passphrase, and submitting it on the command line as well, would be great too (that is to say, it'd be nice if Network-Manager wouldn't pop open any keyring dialogues or errors on the GUI).
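
    A hedged note: newer releases of NetworkManager added exactly this to nmcli (the subcommand below does not exist in the old nmcli shown above). On a system with a recent nmcli, connecting to an AP that has no saved connection yet - the SSID and passphrase here are placeholders - looks like:

        # scan, then connect by SSID, supplying the passphrase inline
        nmcli dev wifi list
        nmcli dev wifi connect "pwln" password "s3cretpass"

    On older versions a connection profile has to be created first (or the connection driven through NetworkManager's D-Bus API), after which nmcli con up can activate it.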

    Read the article

  • Allow access to WordPress site only by links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via a service like MailChimp or sendy.co. Many of these email services suggest, or require, that the newsletter content be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world, nor do I want to require existing recipients to create (or be assigned) accounts with passwords. So the question is: how can I make my WordPress site's content viewable only by clicking the link to it in the email newsletter? It shouldn't be findable in a Google search, but once at the site the visitor can view previous newsletter contents. It seems an .htaccess file would do the trick, but I have been unable to figure out the syntax for it. Thanks for your help. I have copied below two other questions, and answers, which helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: Is there a way to password protect directory only in cpanel. But the user should not be prompted the password, when they try to access it via web? This person's question is the closest I could find to my situation: Restrict direct folder access via .htaccess except via specific links
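
    A minimal sketch of the .htaccess idea, assuming Apache with mod_rewrite and mod_headers enabled; the token value and path are hypothetical. Each newsletter link carries a token in the query string, requests without it are refused, and a header asks search engines not to index the pages:

        # Keep the archive out of search results
        Header set X-Robots-Tag "noindex, nofollow"

        RewriteEngine On
        # Allow the request only if the token from the email is present
        RewriteCond %{QUERY_STRING} !(^|&)nl=s3cr3ttoken(&|$) [NC]
        RewriteRule ^newsletter/ - [F]

    One wrinkle: the query string won't survive as the visitor navigates between pages, so in practice this rule is usually paired with a cookie - set it on the first tokened request and let a second RewriteCond (against %{HTTP_COOKIE}) also accept it afterwards.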

    Read the article

  • How to do role-based access control for a franchise business?

    - by FreshCode
    I'm building the 2nd iteration of a web-based CRM+CMS for a franchise service business in ASP.NET MVC 2. I need to control access to each franchise's services based on the roles a user is assigned for that franchise. Four examples: a receptionist should be able to book service jobs in for her "Atlantic Seaboard" franchise, but not do any reporting; a technician should be able to alter service jobs, but not modify invoices; managers should be able to apply discounts to invoices for jobs within their stores; an owner should be able to pull reports for any franchises he owns. Where should franchise-level access control fit in between the Data - Services - Web layers? If it belongs in my controllers, how should I best implement it?

    Partial schema:

        Roles class
            int ID { get; set; }         // primary key for Role
            string Name { get; set; }

        Franchises class
            short ID { get; set; }       // primary key for Franchise
            string Slug { get; set; }    // unique key for URL access, eg /{franchise}/{job}
            string Name { get; set; }

        UserRoles mapping
            short FranchiseID;           // related to Franchises table
            Guid UserID;                 // related to Users table
            int RoleID;                  // related to Roles table
            DateTime ValidFrom;
            DateTime ValidUntil;

    Background: I built the previous CRM in classic ASP and it runs the business well, but it's time for an upgrade to speed up workflow and leave less room for error. For the sake of proper testing and better separation between data and presentation, I decided to implement the repository pattern as seen in Rob Conery's MVC Storefront series.

    Controller implementation - access control with the [Authorize] attribute. If there were just one franchise involved, I could simply limit access to a controller action like so:

        [Authorize(Roles="Receptionist, Technician, Manager, Owner")]
        public ActionResult CreateJob(Job job) { ... }

    And since franchises don't just pop up overnight, perhaps this is a strong case for the new Areas feature in ASP.NET MVC 2? Or would this lead to duplicated views?

    Controllers, URL routing and Areas: assuming Areas aren't used, what would be the best way to determine which franchise's data is being accessed? I thought of this:

        {franchise}/{controller}/{action}/{id}

    or is it better to determine a job's franchise in a Details(...) action and limit a user's actions with [Authorize]:

        {job}/{id}/{action}/{subaction}
        {invoice}/{id}/{action}/{subaction}

    which makes more sense if any user could potentially have access to more than one franchise, without cluttering the URL with a {franchise} parameter. Any input is appreciated.
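
    One possible shape for the controller-level check, sketched under the assumption that the franchise slug travels in the route as {franchise} and that a repository can answer "does this user hold one of these roles for that franchise?" (the repository type and its method are hypothetical):

        // Hypothetical MVC 2 attribute: [FranchiseAuthorize(Roles = "Manager, Owner")]
        // Requires System.Web, System.Web.Mvc, System.Web.Routing.
        public class FranchiseAuthorizeAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (!base.AuthorizeCore(httpContext))
                    return false;

                // Pull the franchise slug out of the current route
                RouteData routeData = RouteTable.Routes.GetRouteData(httpContext);
                string slug = routeData == null
                    ? null : routeData.Values["franchise"] as string;
                if (string.IsNullOrEmpty(slug))
                    return false;

                // Assumed repository call: does the user hold any of the required
                // roles for this franchise, within ValidFrom/ValidUntil?
                var repo = new UserRoleRepository();
                return repo.UserHasRoleForFranchise(
                    httpContext.User.Identity.Name, slug, Roles);
            }
        }

    AuthorizeCore is what MVC's AuthorizeAttribute invokes per request, so this keeps the franchise check declarative on each action while the actual lookup lives in the service/data layer.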

    Read the article

  • How to best implement Version Control for Web Development?

    - by Adam Taylor
    Version control systems are obviously important in development projects, but their use in web development projects appears to be more complex, what with the requirement of having a web server to run all but the simplest of web applications. With that in mind, I have looked around and discovered a few different methods of using version control in web development projects:

    1. Provide each developer with a virtual machine which is a replica of the development server, and have the developer run their working copy of the application in the virtual machine.
    2. Have each developer use a subdomain on the development server, e.g. john.project.com, and check out their working copy of the app to the directories the subdomain points to.
    3. Use the version control system to check out code, make a change, commit the code, and then check it on the development server (which points to the head of the repository).

    A drawback of 1 is the added time required to create the virtual machines and to ensure the virtual machines are kept in sync with the development server (plus the need(?) to continually change the developer's hosts file to point at the virtual machine rather than the development server). I can see 2 possibly being a problem if absolute URLs are used within the site, unless there is an easy way to update the configuration to use the new subdomains as well. 3 is the easiest to set up, but is rather primitive, and it will presumably become quite tedious for a developer to keep checking the code after every change (see the hook sketch below). How have the users of stackoverflow used version control with web development projects, and which method/workflow was most effective? Please also include extra methods I haven't thought of / read about.
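
    For the third workflow, a small hook can remove the tedium of manually updating the development server. A sketch using git (the paths are hypothetical): a bare repository on the dev server checks the latest commit out into the web root every time someone pushes.

        #!/bin/sh
        # hooks/post-receive in the bare repository on the development server
        GIT_WORK_TREE=/var/www/dev-site git checkout -f master

    With Subversion the equivalent is a post-commit hook that runs svn update against the server's working copy.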

    Read the article

  • How to access a Java servlet running on my PC from outside?

    - by Frank
    I used NetBeans 6.7 to write a servlet. When it runs, it opens a browser window at this address: http://localhost:8080/My_App/Test_Servlet. I replaced "localhost" with my IP address, so now it looks like this: http://192.???.1.??:8080/My_App/Test_Servlet, but when I tried to access it from another computer outside my home, it couldn't read anything. I wonder if I need to change the Windows Firewall settings to allow outside traffic. It's a Paypal IPN app, so I called Paypal; they said they can't access http://192.???.1.??:8080/My_App/Test_Servlet. What on my side should I do to allow traffic from "paypal.com" to reach "My_App/Test_Servlet"? Frank
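
    Two separate things are usually involved here (the rule name below is a placeholder). First, 192.???.1.?? looks like a private LAN address, which Paypal cannot reach from the Internet: the router needs a port-forwarding rule sending external port 8080 to the PC, and Paypal must be given the router's public IP (or a hostname) instead. Second, Windows Firewall on the PC has to admit inbound connections on 8080, for example:

        netsh advfirewall firewall add rule name="Servlet 8080" dir=in action=allow protocol=TCP localport=8080

    (On XP the older syntax applies: netsh firewall add portopening TCP 8080 "Servlet 8080".)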

    Read the article

  • How do I lock the workstation from a Windows service?

    - by Brad Mathews
    Hello, I need to lock the workstation from a Windows service written in VB.Net. I am writing the app on Windows 7, but it needs to work under Vista and XP as well. The User32 API LockWorkStation does not work, as it requires an interactive desktop; I get a return value of 0. I tried calling %windir%\System32\rundll32.exe user32.dll,LockWorkStation from both a Process and from Shell, but still nothing happens. Setting the service to interact with the desktop is a no-go, as I am running the service under the admin account so it can do some other stuff that requires admin rights - like disabling the network - and you can only select the interact-with-desktop option if running under the Local System account. That would be my secondary question: how do I run another app with admin rights from a service running under the Local System account without bugging the user? I am writing an app to control my kids' computer/internet access (which I plan to open source when done), so I need everything to happen as stealthily as possible. I have a UI that handles settings and status notifications in the taskbar, but that is easy to kill and thus defeat the locking. I could make another hidden Windows Forms app to handle the locking, but that just seems a rather inelegant solution. Better ideas, anyone? Thanks! Brad
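
    One approach that tends to work for this, sketched below in C# for brevity (untested, and assuming the service runs as Local System, which WTSQueryUserToken requires): find the console user's session, take that user's token, and start rundll32 user32.dll,LockWorkStation inside that session, where LockWorkStation is allowed to run.

        // Hypothetical sketch - launch LockWorkStation in the console user's session.
        using System;
        using System.Runtime.InteropServices;

        static class SessionLocker
        {
            [DllImport("kernel32.dll")]
            static extern uint WTSGetActiveConsoleSessionId();

            [DllImport("wtsapi32.dll", SetLastError = true)]
            static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

            [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
            struct STARTUPINFO
            {
                public int cb;
                public string lpReserved, lpDesktop, lpTitle;
                public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars,
                           dwFillAttribute, dwFlags;
                public short wShowWindow, cbReserved2;
                public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
            }

            [StructLayout(LayoutKind.Sequential)]
            struct PROCESS_INFORMATION
            {
                public IntPtr hProcess, hThread;
                public int dwProcessId, dwThreadId;
            }

            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool CreateProcessAsUser(IntPtr token, string appName,
                string cmdLine, IntPtr procAttrs, IntPtr threadAttrs, bool inherit,
                uint flags, IntPtr env, string curDir,
                ref STARTUPINFO si, out PROCESS_INFORMATION pi);

            public static void LockConsoleSession()
            {
                IntPtr token;
                // Requires the SE_TCB privilege, i.e. running as Local System.
                if (!WTSQueryUserToken(WTSGetActiveConsoleSessionId(), out token))
                    throw new System.ComponentModel.Win32Exception();

                var si = new STARTUPINFO();
                si.cb = Marshal.SizeOf(typeof(STARTUPINFO));
                si.lpDesktop = @"winsta0\default";   // the interactive desktop
                PROCESS_INFORMATION pi;
                CreateProcessAsUser(token, null,
                    @"C:\Windows\System32\rundll32.exe user32.dll,LockWorkStation",
                    IntPtr.Zero, IntPtr.Zero, false, 0, IntPtr.Zero, null,
                    ref si, out pi);
            }
        }

    For the secondary question, a common split is to keep the privileged work in the Local System service and put the UI in a separate per-user process that talks to it over IPC, so nothing privileged ever needs to show a window.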

    Read the article

  • $.ajax not loading data from the server every time

    - by Ted
    I have written a simple jQuery.ajax function which loads a user control from the server on click of a button. The first time I click the button, it goes to the server and gets me the user control, but each subsequent click of the same button does not go to the server to fetch the user control. Since my user control fetches data from the db, I need to reload the user control every time I hit the button. If, however, I unload my user control from the page and re-click the button, it does go to the server and fetch the user control. Here's the code:

        $("#btnLoad").click(function() {
            if ($(this).attr("value") == "Load Control") {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "loadcontrol" },
                    dataType: "html",
                    success: function(data) {
                        content.html(data);
                    }
                });
                $(this).attr("value", "Unload Control");
            } else {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "unloadcontrol" },
                    dataType: "html",
                    success: function(data) {
                        content.html(data);
                    }
                });
                $(this).attr("value", "Load Control");
            }
        });

    Please let me know if there is any other way I can get my user control loaded from the server every time I click the button.
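
    This has the classic signature of the browser caching the GET response. jQuery's ajax call has a documented option for exactly this - cache: false appends a timestamp parameter so each request is unique - so one likely fix (a sketch of the first branch) is:

        $.ajax({
            url: "AJAXHandler.ashx",
            data: { "lt": "loadcontrol" },
            dataType: "html",
            cache: false,   // force a fresh request instead of a cached copy
            success: function(data) {
                content.html(data);
            }
        });

    Setting no-cache headers in the .ashx handler's response (e.g. Response.Cache.SetCacheability(HttpCacheability.NoCache)) achieves the same from the server side.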

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, etc. My question is which source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world is three general paradigms:

    1. Single repository, locked access (MS SourceSafe)
    2. Single repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in corporate settings is a plus.

    Read the article

  • Data Access Layer in ASP.Net: Where do I create the Connection?

    - by atticae
    If I want to create a 3-layer ASP.Net application (Presentation Layer, Business Layer, Data Access Layer), where is the best place to create the Connection objects? So far I have used a helper class in my Presentation Layer to create an IDbCommand from the connection string in the web.config on each page, and passed it on to the DAL classes/methods. Now I am not so sure this part shouldn't also live in the DAL somehow, because it obviously is part of the data access. The DAL is in a separately compiled project, so I don't have access to the web.config and cannot access the connection string (right?). What is the best practice here?
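
    Worth noting that the premise in the parenthetical is the part that usually trips people up: a class library does not need its own config file, because System.Configuration reads the configuration of the hosting application, so a DAL assembly referenced by the web site sees that site's web.config. A minimal sketch, with the connection-string name hypothetical:

        // In the DAL project (requires a reference to System.Configuration)
        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;

        public static class Db
        {
            public static IDbConnection CreateConnection()
            {
                // Resolves against the web.config of the hosting web application
                string cs = ConfigurationManager
                    .ConnectionStrings["MainDb"].ConnectionString;
                return new SqlConnection(cs);
            }
        }

    This keeps connection creation inside the DAL while the actual string stays in the presentation project's web.config, which is usually the desired split.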

    Read the article

  • How to configure an SVN access list per directory/repository?

    - by abatishchev
    I have the following SVN repository structure running under Apache 2.2 on Windows Server 2008:

        http://example.com/svn/          -> e:\svn (root)
        http://example.com/svn/dir/      -> e:\svn\dir (a directory holding a number of repositories)
        http://example.com/svn/dir/repo/ -> e:\svn\dir\repo (a repository itself)

    How do I set up the access list so that group foo has rw access to repo? I have the following access list:

        [groups]
        @foo = user1, user2

        [/]
        * = r

        [dir/repo:/]
        @foo = rw

    The last section doesn't work in any combination I tried.
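
    One detail that might be the whole problem, sketched below: in an authz file the @ prefix is only used when referencing a group, never when defining it in [groups], and the name before the colon must be the repository name as mod_dav_svn knows it. Under those assumptions a working file would look like:

        [groups]
        foo = user1, user2

        [/]
        * = r

        [repo:/]
        @foo = rw

    Whether the repository is addressed as repo or dir/repo before the colon depends on how SVNParentPath/SVNPath is configured in httpd.conf, so it's worth trying both forms.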

    Read the article

  • Access as a front-end to SQL Server - ADO vs DAO?

    - by webworm
    I have a project that will be using Access 2003 as the front end, with the data stored in SQL Server. Access will connect to SQL Server via linked tables, with all the database logic (stored procedures, views) in SQL Server. Given this setup, would it be better to use ADO or DAO within Access? Is it just a matter of preference, or is one more suited to Access as a front end and SQL Server as the data store, especially when using linked tables? Thanks.

    Read the article

  • What's the best practice for handling system-specific information under version control?

    - by Joe
    I'm new to version control, so I apologize if there is a well-known solution to this. For this problem in particular I'm using git, but I'm curious how to deal with it in all version control systems. I'm developing a web application on a development server, and I have defined the absolute path name to the web application (not the document root) in two places. On the production server this path is different. I'm confused about how to deal with this. I could either:

    1. Reconfigure the development server to share the same path as production, or
    2. Edit the two occurrences each time production is updated.

    I don't like #1 because I'd rather keep the application flexible for any future changes. I don't like #2 because if I start developing on a second development server with a third path, I would have to change this for every commit and update. What is the best way to handle this? I thought of:

    - Using custom keywords and variable expansion (such as setting the property $PATH$ in the version control properties and having it expanded in all the files). Git doesn't support this because it would be a huge performance hit.
    - Using post-update and pre-commit hooks. Probably the likely solution for git, but every time I checked the status it would report the two files as changed. Not really clean.
    - Pulling the path from a config file outside of version control. Then I would have to have the config file in the same location on all servers - might as well just have the same path to begin with.

    Is there an easy way to deal with this? Am I overthinking it?
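
    The common resolution is a variant of the third idea that avoids its drawback: version a template, ignore the real file, and let each server keep its own copy next to the checkout rather than at a fixed system location (file names below are illustrative):

        # .gitignore - the live config never enters version control
        config.local.php

        # config.local.php.sample - versioned template each server copies once:
        #   cp config.local.php.sample config.local.php   (then edit the path)

    Because the real file is untracked, git status stays clean, and each server's path difference lives in exactly one unversioned file that travels with the working copy.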

    Read the article

  • Database advantages? Access, MySQL, MSSQL, or any others?

    - by JimZ
    Dear all Stackoverflowers, I just started to learn programming and I'm putting this question online based on a quote: no question is silly. My work needs to develop a web-based order system, which needs a database. Having used Excel for years as a general office user, I naturally turned to Access. However, most people say Access is very limited compared to MySQL, MSSQL, or any other more professional database system. But after developing some functions for my company's order system, I really find that Access can fulfill my requirements. I have searched on stackoverflow and found no general answer to my doubt. Now I'm sincerely hoping some experienced and professional developers could clear my doubts. Here are some Access advantages which I don't think other database systems have; I hope you could help me also find these advantages in the others:

    1. Access is portable - I can just copy a xxx.accdb file to my company and continue with development.
    2. Access makes it easy to generate helpful tables; for example, it will automatically generate an auto-counting field that can be used as the primary key.
    3. It is more compatible with Excel for displaying and filtering data.
    4. Importantly, it needs nearly no environment setup - just MS Office installed.
    ............others

    However, I also found some points where MSSQL has the advantage:

    1. Security.
    2. Easy backups (just use a BACKUP..... SQL statement).
    3. Stored procedures can keep some functions in the database.
    ...............others

    Specifically, I wish some friends could tell me how to make the other databases portable, since I usually work both at home and in the office. It's a headache to move MSSQL work to my office, since the versions of MSSQL are not the same. Thank you all and best regards. :)

    Read the article

  • Is there a way to sync (two-way) tables between a MySQL server and a local MS Access?

    - by Kailen
    Help me figure out a solution to a (not so unique) problem. My research group has GPS devices attached to migratory animals. Every once in a while a research tech will be within range of an animal and will get the chance to download all the logged points. Each individual device spits out a single dbf, and new locations are just appended to the end (so the file is cumulative). These data need to be shared among the research group. Everyone else (besides me) wants to use Access, so they can make small edits, and they prefer that interface; they do not like using MySQL. The solution I came up with is: (a) the person who downloads the file goes to a web page, enters the animal ID into a form, chooses the .dbf file, and uploads it to a MySQL database on the server (I still have to write PHP code to read the dbf and write SQL insert statements from it); (b) everyone syncs from their local Access database to the server (this is natively possible from Access but very clunky). Is there a tool (preferably open source) that can compare an Access table to a MySQL table and sync the two (both ways)? Alternatively, does anyone have a more elegant solution? The ultimate goal is to allow everyone to have access to the most current data on their computers using their preferred database app.
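
    For the PHP step in (a), the dbase extension can read the uploaded file directly. A minimal sketch, assuming the extension is installed and using hypothetical table and column names:

        <?php
        // Read every row of the uploaded dbf and insert it into MySQL.
        $animalId = $_POST['animal_id'];
        $db  = dbase_open('/tmp/upload.dbf', 0);            // 0 = read-only
        $pdo = new PDO('mysql:host=localhost;dbname=tracking', 'user', 'pass');
        $ins = $pdo->prepare(
            'INSERT IGNORE INTO gps_points (animal_id, lat, lon, fix_time)
             VALUES (?, ?, ?, ?)');

        $n = dbase_numrecords($db);
        for ($i = 1; $i <= $n; $i++) {                      // dbf rows are 1-based
            $rec = dbase_get_record_with_names($db, $i);
            if ($rec['deleted']) continue;                  // skip deleted rows
            $ins->execute(array($animalId, $rec['LAT'], $rec['LON'], $rec['TIME']));
        }
        dbase_close($db);

    Since the dbf is cumulative, something like INSERT IGNORE against a unique key on (animal, timestamp) keeps re-uploads from duplicating points.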

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that surprised most people is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

    Live Debug is a Nightmare for Cloud Applications

    When we develop against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage once the Windows Azure platform was announced. By using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to use logs and the diagnostic monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation under any user to attach to a remote process. This is less secure compared to authenticated remote debugging, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

    How to Use the Remote Debugger

    First, let's create a new Windows Azure Cloud project in Visual Studio, select ASP.NET Web Role, and create an ASP.NET WebForm application. Then right-click on the cloud project and select "Publish". In the publish dialog we need to make sure the application will be built in debug mode, since .NET assemblies cannot be debugged in release mode. I enabled Remote Desktop, as I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. Then I selected the "advanced settings" tab and checked "Enable Remote Debugger for all roles". In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this box; currently there's no way to specify which role(s) and which instance(s) to enable. Finally, click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

    Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio, expand the "Cloud Services" node, find the cloud service, role and instance we have just published and want to debug, right-click on the instance and select "Attach Debugger". Then after a while (depending on how fast our Internet connection to the Windows Azure data center is) Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint; the call stack and watch features are all available to use. Now let's hit F5 to continue, and back in the browser we find the page rendered successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but they are visible if we open the publish package.
    At the same time, Visual Studio generates a certificate, included in the package, for the remote debugger. In the Azure management portal we can find a certificate under our application which was created and uploaded by the remote debugger plugin. (Since I enabled Remote Desktop there were two certificates on my application; the other one is for the remote debugger.) When the application is deployed, the Windows Azure system opens the related ports for the remote debugger - on my application two new ports were opened. Finally, on our WACS virtual machine, the Windows Azure system copies and starts the remote debug component matching the version of Visual Studio we are using; the process is visible in Task Manager on the virtual machine. Our application can then be debugged remotely through the Visual Studio remote debugger.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application, from our local machine to the Windows Azure virtual machine, once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk, and since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixing (publishing a beta version to the Azure staging slot), rather than for production.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite a while since we last blogged. This blog is written by Leif Lourie, a Curriculum Developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a Curriculum Developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release of one of the Service Delivery Platform products. Thanks to Leif not only for this blog, but for making the VM described in the blog available. Cheryl Lander, Oracle Communications InfoDev Senior Director

    Being able to download, install and test a product within a day is often very important for the people doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test it will probably happen anyway, but then maybe with a focus on integration instead of product features. We have a long tradition of creating complex software that is easy to install and test, and we have often been praised for the ease of getting our products up and running. One key to this has been that there has always been an installer for Windows, as well as for the production environments, which are usually Unix and Linux. And the Windows installer has, in most cases, been released for development and testing purposes.

    Lately, this has changed. Our products are very seldom released for the Windows platform at all, and even the Linux versions are almost always released for 64-bit systems. This is creating problems for many of the people who want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us are using a laptop with Windows or Mac OS; some of us are using Linux or Solaris, but probably a non-certified distribution for the product you want to test.

    My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products, and for this reason I have been using virtual machines for many years. I have a ready-made base system, with the necessary tools installed for all the products I create hands-on practices for. Whenever I start working on hands-on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time! Now I would like to start saving time for my favorite student: you!

    If you are using our products and regularly test new versions, you might benefit from the virtual machine that is now available on Oracle Technology Network: the Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop; VirtualBox can be installed on top of any of these platforms and gives you the ability to run virtual machines on your laptop.

    After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test, for example Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in the Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by we will make this collection bigger. Also, the goal is to update the virtual machine about once or twice per year, so you will always be able to get a well-maintained virtual machine for the Service Delivery Platform products from us.

    We Value Your Feedback

    If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels: email [email protected], or post a comment on this blog. Thanks for reading!

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction

    I'm currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases. As the transaction model is currently not well documented, I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I've got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I've not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course.

    Transactional Messaging

    Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems.

    Using TransactionScope

    In .NET the TransactionScope class can be used to perform a series of actions in a transaction. The using declaration is typically used to define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.

        // Create a transactional scope.
        using (TransactionScope scope = new TransactionScope())
        {
            // Do something.

            // Do something else.

            // Commit the transaction.
            scope.Complete();
        }

    In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions.

    Transactions in Brokered Messaging

    Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however, there are some limitations around what operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions spanning a messaging entity and a database not possible. When sending messages, the send operations can participate in a transaction, allowing multiple messages to be sent within a transactional scope. This allows for "all or nothing" delivery of a series of messages to a single queue or topic. When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction. The same restriction of one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

    Sending Multiple Messages in a Transaction

    A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages are enqueued or, if the transaction fails to commit, no messages are enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

        QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

        Console.Write("Sending");

        // Create a transaction scope.
        using (TransactionScope scope = new TransactionScope())
        {
            for (int i = 0; i < 10; i++)
            {
                // Send a message
                BrokeredMessage msg = new BrokeredMessage("Message: " + i);
                queueClient.Send(msg);
                Console.Write(".");
            }
            Console.WriteLine("Done!");
            Console.WriteLine();

            // Should we commit the transaction?
            Console.WriteLine("Commit send 10 messages? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
        }
        Console.WriteLine();
        messagingFactory.Close();

    The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent, the user has the option to either commit or abandon the transaction. If the user enters "yes", the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than "yes", the transaction will not commit and the messages will not be enqueued.

    Receiving Multiple Messages in a Transaction

    The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of messages that are related, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.) The following code shows how this can be achieved.

        using (TransactionScope scope = new TransactionScope())
        {
            while (true)
            {
                // Receive a message.
                BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
                if (msg != null)
                {
                    // Write the message body and complete the message.
                    string text = msg.GetBody<string>();
                    Console.WriteLine("Received: " + text);
                    msg.Complete();
                }
                else
                {
                    break;
                }
            }
            Console.WriteLine();

            // Should we commit?
            Console.WriteLine("Commit receive? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
            Console.WriteLine();
        }

    Note that if there are a large number of messages to be received, there is a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller batches of messages within the transaction. It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved.

        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // Receive one message from each subscription.
                BrokeredMessage msg1 = subscriptionClient1.Receive();
                BrokeredMessage msg2 = subscriptionClient2.Receive();

                // Complete the message receives.
                msg1.Complete();
                msg2.Complete();

                Console.WriteLine("Msg1: " + msg1.GetBody<string>());
                Console.WriteLine("Msg2: " + msg2.GetBody<string>());

                // Commit the transaction.
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    Unsupported Scenarios

    The restriction that only one top-level messaging entity can participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may not be present in future releases. The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.

        try
        {
            // Create a transaction scope.
            using (TransactionScope scope = new TransactionScope())
            {
                BrokeredMessage msg1 = new BrokeredMessage("Message1");
                BrokeredMessage msg2 = new BrokeredMessage("Message2");

                // Send a message to Queue1
                Console.WriteLine("Sending Message1");
                queue1Client.Send(msg1);

                // Send a message to Queue2
                Console.WriteLine("Sending Message2");
                queue2Client.Send(msg2);

                // Commit the transaction.
                Console.WriteLine("Committing transaction...");
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    When attempting to send a message to the second queue, the following exception is thrown:

        No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

    Another scenario where transactional support could be useful is when forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported. Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.

    Workarounds for Unsupported Scenarios

    There are some techniques that developers can use to work around the one top-level entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then the subscriptions would use the default (match-all) filter, and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction. In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top-level messaging entity, and a subscription is not, this scenario will work.

    Read the article

  • Resolving the Access is Denied Error in VSeWSS Deployments

    - by Damon
    Visual Studio Extensions for Windows SharePoint Services 1.3 (VSeWSS 1.3) tends to make my life easier, except when I'm typing out the words that make up the VSeWSS acronym - really, what a mouthful. But one of the problems that I routinely encounter is error messages when trying to deploy solutions. These normally look something like the following:

        Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

    I tried a variety of steps to resolve this issue:

    - Recycling the application pool
    - Restarting IIS
    - Closing Visual Studio
    - Not detaching from the debugger until a request was fully completed
    - Logging off and logging back into Windows
    - etc.

    Nothing actually worked. Some of these resolution attempts seemed to keep the problem from happening quite as frequently, but I still have no idea what EXACTLY causes the problem, and it would rear its ugly head from time to time. Unfortunately, the only resolution I found that seemed to work was to reboot the machine, which is a crappy resolution. Finally sick enough of the problem to spend some time on it, I went searching to see whether anyone else was having this issue. People seem to suggest that turning off the Indexing Service on your machine helps resolve the problem. I tried turning it off, but I kept having issues. Which was depressing. Fortunately, I stumbled upon the resolution when I was looking through the services list. If you encounter the issue, all you have to do is restart the World Wide Web Publishing Service. I've had a 100% success rate so far with this approach. I'm not sure whether disabling the Indexing Service is part of the solution, but I've kept it disabled for the time being, because I'm really sick of having to reboot my machine to deal with that error message. If you do VSeWSS development, you may also want to check out this blog post: VSeWSS 1.3 - Getting around the "Unable to load one or more of the requested types" Error
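
    For reference, restarting just that service from an elevated command prompt looks like this:

        net stop w3svc
        net start w3svc

    This is narrower than a full iisreset, which bounces the other IIS services as well, and it's quick enough to script alongside a deployment step.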

    Read the article
