Search Results

Search found 12714 results on 509 pages for 'db schema'.


  • ArchBeat Top 10 for November 18-24, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook page for the week of November 18-24, 2012.

      - One-Stop Shop for over 200 On-Demand Oracle Webcasts: Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.
      - Oracle SOA Suite 11g PS 5 introduces BPEL with conditional correlation for aggregation scenarios | Lucas Jellema: An extensive, detailed technical post from Oracle ACE Director Lucas Jellema.
      - Oracle Utilities Application Framework V4.2.0.0.0 Released | Anthony Shorten: Principal Product Manager Anthony Shorten shares an overview of the changes implemented in the new release.
      - Fault Handling and Prevention - Part 1 | Guido Schmutz and Ronald van Luttikhuizen: In this technical article, part one of a four-part series, Oracle ACE Directors Guido Schmutz and Ronald van Luttikhuizen guide you through an introduction to fault handling in a service-oriented environment using Oracle SOA Suite and Oracle Service Bus.
      - Oracle BPM Process Accelerators and process excellence | Andrew Richards: "Process Accelerators are ready-to-deploy solutions based on best practices to simplify process management requirements," says Capgemini's Andrew Richards. "They are considered to be 'product grade,' meaning they have been designed, engineered, documented and tested by Oracle themselves to a level that they can be deployed as-is for a solution to a problem or extended as appropriate for a particular scenario."
      - Videos: Getting Started with Java Embedded | The Java Source: Interested in Java Embedded? You'll want to check out these videos provided by Tori Wieldt, including interviews with Oracle's James Allen and Kevin Smith, recorded at ARM TechCon.
      - JPA SQL and Fetching tuning (EclipseLink) | Edwin Biemond: Oracle ACE Edwin Biemond's post illustrates how to "use the department and employee entity of the HR Oracle demo schema to explain the JPA options you have to control the SQL statements and the JPA relation Fetching."
      - Devoxx 2012 Trip Report - clouds and sunshine | Markus Eisele: Oracle ACE Director Markus Eisele shares an extensive and entertaining account of his experience at Devoxx 2012.
      - Towards Ultra-Reusability for ADF - Adaptive Bindings | Duncan Mills: "The task flow mechanism embodies one of the key value propositions of the ADF Framework," says Duncan Mills. "However, what if we could do more? How could we make task flows even more re-usable than they are today?" As you might expect, Duncan has answers for those questions.
      - Java Specification Requests in Numbers | Markus Eisele: Oracle ACE Director Markus Eisele shares some interesting data culled from the Java Community Process site.

    Thought for the Day: "You can't have great software without a great team, and most software teams behave like dysfunctional families." — Jim McCarthy (Source: SoftwareQuotes.com)

    Read the article

  • Filtering option list values based on security in UCM

    - by kyle.hatlestad
    Fellow UCM blog writer John Sim recently posted a comment asking about filtering values based on the user's security. I had never dug into that detail before, but thought I would take a look. It ended up being trickier than I originally thought and required a bit of insider knowledge, so I thought I would share.

    The first step is to create the option list table in Configuration Manager. You want to define the column for the option list value and any other columns desired. You then want to have a column which will store the security attribute to apply to the option list value. In this example, we'll name the column 'dGroupName'.

    The next step is to create a View based on the new table. For the Internal and Visible column, you can select the option list column name. Then click on the Security tab, uncheck the 'Publish view data' checkbox and select the 'Use standard document security' radio button. Click on the 'Edit Values...' button and add the values for the option list. In the dGroupName field, enter the Security Group (or Account, if you use Accounts for security) to apply to that value. Create the custom metadata field and apply the View just created.

    The next step requires file system access to the server. Open the file [ucm directory]\data\schema\views\[view name].hda in a text editor. Below the line '@Properties LocalData', add the line: schSecurityImplementorColumnMap=dGroupName:dSecurityGroup. The 'dGroupName' value designates the column in the table which stores the security value. 'dSecurityGroup' indicates the type of security to check against; it would be 'dDocAccount' if using Accounts. Save the file and restart UCM.

    Now when a user goes to the check-in page, they will only see the options for which they have read and write privileges to the associated Security Group. And on the Search page, they will see the options for which they have just read access. One thing to note: if a value that a user normally can't view on Check-in or Search is applied to a document, but the document is viewable by the user, the user will be able to see the value on the Content Information screen.
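    For reference, a minimal sketch of what the edited [view name].hda fragment might look like after the change; only the schSecurityImplementorColumnMap line comes from the steps above, and the surrounding section markers are shown as an assumption about the rest of the file:

        @Properties LocalData
        schSecurityImplementorColumnMap=dGroupName:dSecurityGroup
        @end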

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding which markets to provide goods and services in, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of a company's shareholders and stakeholders through the creation of customer value.

    According to InformationWeek.com, Google has a distinctive technology advantage over its competitors like Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive compared to programmers working for their competitors. He based this theory on Google's competitors having to spend up to four times as much just to keep up.

    In addition to Google's technological advantage, they have also developed a decentralized management schema where employees report directly to multiple managers and team project leaders. This allows the responsibility for the technology department to be shared among multiple senior-level engineers and removes the need for a singular department head to oversee the activities of the department. This is a unique approach compared to the standard management style. Typically a department head like a CIO or CTO would oversee the department's global initiatives and business functionality. This would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators and database administrators.

    It goes without saying that an IT professional's responsibilities would be directed by Google's technological advantage and management strategy, simply because they work within the department, would have to design, develop, and support the high-performance systems, and would have to report to multiple managers and project leaders on a regular basis. Since Google was established on and driven by new and emerging technology, all other departments are directly impacted by the technology department. In fact, they have to cater to the technology department, since it is a huge driving force in the success of Google.

    Reference:
    http://www.smbtn.com/smallbusinessdictionary/#b
    http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now with more than 30 companies playing in the Backend-As-A-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data, as well as legacy PC data through established connectors; SAP, for example, provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing Channel Partners and Consultants to build robust integration applications that can consume whatever data sources the client wants to expose.

    I just happen to be close to finishing a cloud-based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a cloud service which the client can configure with VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc.). I can also provide a framework that allows people to build in their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public-facing web service that could securely be consumed by applications to show data through HTML5 on any device.

    My gut is, this isn't as hard as it sounds. Almost all of the 30+ companies (with more popping up almost weekly) have come into existence in the last 18 months or so, which tells me either the root technology, or the skillset to create the technology, is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology.

    The Register: WTF is BaaS
    One Minute Video from Kony on their BaaS

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background

    This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs, by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single program ID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Build - Design Time

    The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1

    We will create a placeholder project first in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool

    Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that the validation for schema has been turned off. As a result, this will allow the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time - Part 2

    Create two simple XML-based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

        fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as the top element. With the routing function in place, build the routing table to add two branches to route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time

    After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.
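    To make the routing concrete, here is a sketch of how the two branches of that routing table could be configured; only DEBMAS06 is named in the post, so the MATMAS IDOC type number is an assumption:

        (: routing expression evaluated against the inbound payload :)
        fn:local-name-from-QName(fn:node-name($body/*[1]))

        Value "DEBMAS06" : route to Business Service SAP_DEBMAS_File
        Value "MATMAS05" : route to Business Service SAP_MATMAS_File   (: assumed MATMAS IDOC type :)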

    Read the article

  • Using ASP.NET Membership Provider with an ACL

    - by geekrutherford
    Up until recently one of my applications has used the membership provider within ASP.NET exclusively. However, it has been proposed that while the currently defined roles are beneficial, security needs to be more granular to restrict both access to certain pages and functionality present within a given page.

    Unfortunately, the role-based security ASP.NET gives you out of the box falls down in this area. This is not due to a lack of foresight by Microsoft, but rather it was simply not designed for implementing both role-based security and any inherent ACL you may define within these roles. Mind you, some would say an ACL is independent of the role to which a user belongs and is assigned to the user directly.

    The application mentioned here has its own User object (which encapsulates the membership provider user object as a property) and SQL Server table to store extended information not present in the aspnet_users table. While I could have modified the ASP.NET membership schema to suit the application's needs, it seemed smarter to simply create a separate table with a foreign key back to the aspnet_users table. Since I have a separate object to store extended user information, I simply created an ACL object and exposed it as a property of my user object.

    This is all well and good, but it does not help in regards to the SiteMapProvider and restricting access at the page level based on the user's ACL.

    The straightforward answer would be to develop some code within the databound event for the menu that checks the page title and has hardcoded logic that dictates a user must have certain permissions turned on. The problem with this approach is that it's HARDCODED!!! If you need to change access to a page you'd need to do a build and go through your normal deployment process....ugh!!!

    An alternative method, albeit not perfect, is to utilize the resourceKey property on the SiteMapNodes in the SiteMap file with the name of the required permission to view the page. Within the databound event for your menu you iterate the SiteMapNodes in the menu's SiteMapProvider looking for a match at the page level based on title. When a match is detected, you have a switch/case on the SiteMapNode's resourceKey (the name of the ACL permission required). The case for the resourceKey ensures the user's ACL permission is turned on and voila! This is notably not perfect in that it is using the resourceKey in a manner other than intended. Since the application is not localized, using it in the manner described is not an issue.

    Below is a sample SiteMap file with the resourceKey used as the ACL permission identifier, followed by the ItemDataBound event. This application uses the Telerik Menu control.
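    The original post's markup did not survive in this excerpt, so the following is a reconstruction in the spirit of the description above; the page names, permission names, and ACL properties are illustrative assumptions. First, a minimal Web.sitemap sketch carrying permission names in resourceKey:

        <?xml version="1.0" encoding="utf-8" ?>
        <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
          <siteMapNode url="~/Default.aspx" title="Home">
            <!-- resourceKey repurposed to name the ACL permission required to view the page -->
            <siteMapNode url="~/Reports.aspx" title="Reports" resourceKey="CanViewReports" />
            <siteMapNode url="~/Admin.aspx" title="Administration" resourceKey="CanAdminister" />
          </siteMapNode>
        </siteMap>

    And a sketch of the ItemDataBound handler; the Telerik RadMenu signature is assumed, and CustomUser/CustomAcl are stand-ins for the application's own user wrapper described earlier:

        protected void RadMenu1_ItemDataBound(object sender, Telerik.Web.UI.RadMenuEventArgs e)
        {
            // Find the SiteMapNode whose title matches this menu item (page-level match).
            SiteMapNode node = FindNodeByTitle(SiteMap.RootNode, e.Item.Text);
            if (node == null || string.IsNullOrEmpty(node.ResourceKey))
                return; // no permission required for this page

            // Hypothetical ACL object exposed as a property of the application's User object.
            CustomAcl acl = ((CustomUser)Context.User).Acl;

            switch (node.ResourceKey)
            {
                case "CanViewReports":
                    e.Item.Visible = acl.CanViewReports;
                    break;
                case "CanAdminister":
                    e.Item.Visible = acl.CanAdminister;
                    break;
            }
        }

        private static SiteMapNode FindNodeByTitle(SiteMapNode root, string title)
        {
            // Depth-first search of the site map for a node with a matching title.
            if (root.Title == title) return root;
            foreach (SiteMapNode child in root.ChildNodes)
            {
                SiteMapNode match = FindNodeByTitle(child, title);
                if (match != null) return match;
            }
            return null;
        }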

    Read the article

  • SQL installation scripts for WebCenter Content 11g

    - by KJH
    As part of the installation of WebCenter Content 11g (UCM or URM), one of the main tasks is to run the Repository Creation Utility (RCU) to establish the database schema and tables. This is pretty helpful because it runs all the scripts you need without you having to manually set anything up in the database. In UCM 10g and earlier, the installation itself would establish the database tables if you wanted it to. Otherwise, the SQL scripts were available to be run independently ahead of time. For DBAs who wanted to understand what was being done to the database for the application, this was helpful. But in 11g, that is all masked by RCU; you don't get to see the scripts at all as part of its establishing the tables.

    But if you comb through the directories for RCU, you can track them down. They are in the /rcuHome/rcu/integration/contentserver11/sql/ directories. And to understand the order in which they are run, you can open up the /rcuHome/rcu/integration/contentserver11/contentserver11.xml file and see how they are run there. The order in which they are run is:

      1. contentserverrole.sql
      2. contentserveruser.sql
      3. intradoc.sql
      4. workflow.sql
      5. formats.sql
      6. users.sql
      7. default.sql
      8. contentprocedures.sql

    If you are installing WebCenter Records (URM), it will run some additional scripts between formats.sql and users.sql:

      - MetadataSet.sql
      - UIEnhancements.sql
      - RecordsManagement.sql
      - RecordsManagement_default.sql
      - ClassifiedEnhancements.sql
      - ClassifiedEnhancements_default.sql

    In addition to the scripts being available within the RCU install directories, they are also available from within the Content Server UI. If you go to Administration -> DataStoreDesign SQL Generation, this page allows you to download these various SQL scripts. From here, you can select your particular database type and which components to include. Several components make changes dynamically to the database when they are enabled, so these scripts give you a way to inspect what is being run during that startup time. Once selected, click Generate and you can either view or download the scripts from the Actions menu.

    DISCLAIMER: Installations are ONLY supported when done with the Repository Creation Utility. These scripts are for reference only and are not supported to be run manually.

    Read the article

  • The most dangerous SQL Script in the world!

    - by DrJohn
    In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice, but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script which will clear down your database before you run your SQL scripts to rebuild all the database objects.

    Why start with a clean database? During the development process, it is all too easy to leave old objects hanging around in the database which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this will mean an end-user querying the views will get the wrong data and your reputation will take a nose dive as a result! Starting with a clean, empty database and then building all your database objects using SQL scripts with the technique outlined in my previous blog means you know exactly what you have in your database. The database can then be repopulated using SSIS and bingo, you have a data mart "to go".

    My universal "drop all" SQL script

    To ensure you start with a clean database, run my universal "drop all" script which you can download from here: 100_drop_all.zip

    By using the database catalog views, the script finds and drops all of the following database objects:

      - Foreign key relationships
      - Stored procedures
      - Triggers
      - Database triggers
      - Views
      - Tables
      - Functions
      - Partition schemes
      - Partition functions
      - XML Schema Collections
      - Schemas
      - Types
      - Service broker services
      - Service broker queues
      - Service broker contracts
      - Service broker message types
      - SQLCLR assemblies

    There are two optional sections to the script: drop users and drop roles. You may use these at your peril, particularly as you may well remove your own permissions! Note that the script has a verbose mode which displays the SQL commands it is executing. This can be switched on by setting @debug=1. Running this script against one of the system databases is certainly not recommended! So I advise you to keep a USE database statement at the top of the file. Good luck and be careful!!
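    The script itself is in the download above; as a flavor of the technique, here is a minimal sketch of one of its steps, dropping all foreign key relationships by generating DROP statements from the catalog views (the @debug flag mirrors the verbose mode mentioned above, and the exact style of the real script may differ):

        DECLARE @debug BIT = 0;
        DECLARE @sql NVARCHAR(MAX) = N'';

        -- Build one ALTER TABLE ... DROP CONSTRAINT per foreign key in the database
        SELECT @sql = @sql
            + N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id))
            + N'.' + QUOTENAME(t.name)
            + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';' + CHAR(10)
        FROM sys.foreign_keys AS fk
        JOIN sys.tables AS t ON fk.parent_object_id = t.object_id;

        IF @debug = 1 PRINT @sql;   -- verbose mode: show the generated commands
        EXEC sys.sp_executesql @sql;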

    Read the article

  • EJB Named Criteria - Apply bind variable in Backingbean

    - by Deepak Siddappa
    EJB Named Criteria are predefined and reusable where-clause definitions that are dynamically applied to a ViewObject query. They are often used to filter the ViewObject SQL query based on where-clause conditions. Take a scenario where we need to filter the query on where-clause conditions: instead of manipulating SQL statements directly, use an EJB Named Criteria, which is supported by default in ADF, and set the bind variable parameter at run time. You can download the sample workspace from here [runs with Oracle JDeveloper 11.1.2.0.0 (11g R2) + HR Schema].

    Implementation Steps

      1. Create a Java EE Web Application with an entity based on the Employees table, then create a session bean and a data control for the session bean.
      2. Open the DataControls.dcx file and create the sparse XML as shown below. In the sparse XML, navigate to the Named Criteria tab -> Bind Variable section and create the bind variable deptId.
      3. Now create a named criteria and map the query attributes to the bind variable.
      4. In the ViewController, create the index.jspx page; from the Data Controls palette, drop employeesFindAll -> Named Criteria -> EmployeesCriteria -> Table as an ADF Read-Only Filtered Table and create the backing bean "IndexBean".
      5. Open the index.jspx page and remove the "filterModel" binding from the table; add an <af:inputText /> and a command button and bind them to the backing bean. For the command button, create the actionListener "applyEmpCriteria" and add the code below to the file.

        public void applyEmpCriteria(ActionEvent actionEvent) {
            DCIteratorBinding dc = (DCIteratorBinding) evaluteEL("#{bindings.employeesFindAllIterator}");
            ViewObject vo = dc.getViewObject();
            vo.applyViewCriteria(vo.getViewCriteriaManager().getViewCriteria("EmployeesCriteria"));
            vo.ensureVariableManager().setVariableValue("deptId", this.getDeptId().getValue());
            vo.executeQuery();
        }

        /**
         * Programmatic evaluation of EL
         *
         * @param el EL to evaluate
         * @return Result of the evaluation
         */
        public Object evaluteEL(String el) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elContext = fctx.getELContext();
            Application app = fctx.getApplication();
            ExpressionFactory expFactory = app.getExpressionFactory();
            ValueExpression valExp = expFactory.createValueExpression(elContext, el, Object.class);
            return valExp.getValue(elContext);
        }

    Run the index.jspx page, enter a departmentId value of 90 and click the ApplyEmpCriteria button. The bind variable for the named criteria will now be applied at runtime in the backing bean, and the ViewObject query will re-execute to filter on the where-clause condition.

    Read the article

  • Do you know about the Visual Studio 2010 Database Projects Guidance?

    - by Martin Hinshelwood
    Early on in the Team System (now Visual Studio ALM) cycle, a new product surfaced within Team System that was affectionately called "Data Dude" but had the more formal name of "Visual Studio 2005 Team Edition for Database Professionals". The purpose of this product was to try and make the database a "first class citizen" in the development world.

    Those that started using Visual Studio 2005 Team Edition for Database Professionals (Data Dude) loved it, but everyone else did not get it. The capabilities were a little patchy, but the one thing it did bring to the party was the ability to put your database schema under source control. This was revolutionary, as previously your DBA sat as far away from the team as possible, and usually in a dark cupboard; now they could partake of all the goodness of Version Control, Work Item Tracking and automated builds. The problem was that the understanding required to manage these projects was very different to that needed previously. Then the Visual Studio ALM Rangers got a hold of it... and produced some of the best guidance available.

    Figure: Download the guidance from http://vsdatabaseguide.codeplex.com/

    This guidance discusses scenarios and approaches for using the Database Projects in Visual Studio 2010 to help you use the tools more effectively and maximize their value to your organization. The guidance is focused on these five areas:

      - Solution and Project Management
      - Source Code Control and Configuration Management
      - Integrating External Changes with the Project System
      - Build and Deployment Automation with Visual Studio Database Projects
      - Database Testing and Deployment Verification

    Each of these areas has common guidance, usage scenarios, hands-on labs, and lessons learned from real-world engagements and the community discussions.

    The guidance is broken down into three packages:

      - Guidance documentation
      - Hands-on-lab (HOL) documentation (note: the documentation is available in XPS-only format packages or complete XPS, PDF, DOCX format packages)
      - HOL package

    If you need assistance and no one else can help, then you may need to call the Visual Studio ALM Rangers. The Visual Studio ALM Rangers have the mission to provide out-of-band solutions for missing features or guidance. They are supported by the Microsoft Product Group, Microsoft Consulting Services, Microsoft Most Valued Professionals (MVPs) and technical specialists from technology communities around the globe, giving you a real-world view from the field, where the technology has been tested and used. For more information on the Rangers please visit http://msdn.microsoft.com/en-us/vstudio/ee358786.aspx and for a list of other Rangers projects please see http://msdn.microsoft.com/en-us/vstudio/ee358787.aspx.

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain some keys and values. For example, jobid and setid may be keys for one type of report, and userid and groupId for another type of report. The type of keys that can be referred from the document is determined by the namespaces used in the XML doc. These keys are stored on the basis of the namespace used in the XML document. For example, if a tag in an XML fragment uses namespace="myspace1", then I have keys A and B for myspace1 stored in another table. It will fetch those keys from that table for this namespace, look for their values in the XML doc, and store them in another table along with a pointer to the XML document (the ID of a record storing the complete XML document in a cell).

    Use cases:

      1. When the user queries for a key and value, I return the document or set of documents having those key/value pairs.
      2. When the user queries for a certain key and provides a name for an XSLT (pre-stored), I fetch the set of documents fulfilling that criteria and convert the XML to HTML with the specified XSLT.
      3. When the user asks for a particular fragment of a doc, it can fetch a subset from a particular document as well.
      4. When the user queries for the top x values of a certain key, I return the set of documents having the top x values of that key.

    I am using the DB2 database for its support of XML along with relational capabilities. That makes it easier for me to run XPath expressions and fetch values of keys, and also to aggregate a set of documents fulfilling a criteria, all on the database side.

    Problems: DB2 stores XML docs of up to 2 GB in size. Retrieval is very slow. If something involves many documents, then it takes significant time for things to show up in the browser, and the user has to wait.

    Can MongoDB help in this case, as it is document oriented? Can I do XML-related XPath queries and document transformations on the DB side? Or is it OK to use both in such a case?

    Read the article

  • Motivating yourself to actually write the code after you've designed something

    - by dpb
    Does it happen only to me or is this familiar to you too?

    It's like this: you have to create something; a module, a feature, an entire application... whatever. It is something interesting that you have never done before, it is challenging. So you start to think how you are going to do it. You draw some sketches. You write some prototypes to test your ideas. You are putting different pieces together to get the complete view. You finally end up with a design that you like, something that is simple, clear to everybody, easily maintainable... you name it. You covered every base, you thought of everything. You know that you are going to have this class and that file and that database schema. Configure this here, adapt this other thingy there, etc.

    But now, after everything is settled, you have to sit down and actually write the code for it. And it is not challenging anymore... Been there, done that! Writing the code now is just "formalities" and feels like reiterating what you've just finished.

    At my previous job I sometimes got away with it because someone else did the coding based on my specifications, but at my new gig I'm in charge of the entire process so I have to do this too ('cause I get paid to do it). But I have a pet project I'm working on at home, after work, and there is just me and no one is paying me to do it. I do the creative work and then when the time comes to write it down I just don't feel like it (let's browse the web a little, see what's new on P.SE, on SO, etc.). I just want to move to the next challenging thing, and then to the next, and the next...

    Does this happen to you too? How do you deal with it? How do you convince yourself to go in and write the freaking code? I'll take any answer.

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

    After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger: if inheritable permissions are not allowed on the Exchange 2003 object in the Active Directory schema, Exchange server authentication cannot be achieved between the servers.

    It seems when BlackBerry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can, however, re-establish inheritance without overwriting the existing ACL, so that the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored.

    To re-establish inheritance:

      1. Open ADSI Edit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
      2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
      3. In the right pane, right-click on the CN=Server Name of your Exchange 2003 server and select Properties.
      4. Navigate to the Security tab and hit Advanced toward the bottom.
      5. Check the checkbox that reads "Include inheritable permissions" toward the bottom of the dialogue box.

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release.

    Please note that if you are doing an xcopy deploy you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .NET Framework; the xcopy deploy instructions have been updated to show you how to do this.

    Below are the notes from the release page.

    ======

    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new:

      - Analysis Services Tabular Smart Diff
      - Tabular Actions Editor
      - Tabular HideMemberIf
      - Tabular Pre-Build

    Fixes and updates:

      - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 like Lookups and new features in Reporting Services 2012.
      - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
      - SSAS: roles reports points to wrong server
      - SSIS: Variable Copy / Move broken in v1.5
      - "Unused DataSets Report" not showing up in Context menu on VS2005 if Solution Folders used
      - SSAS Tabular: Create a UI for managing actions
      - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
      - SSIS: Copy/Move Variable erroring due to custom Control Flow item icon
      - SSIS Performance Visualization: index out of range
      - Fixed bugs in AggManager when aggregation design IDs don't match names

    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle engineer Chris Kasso and technology evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!

    Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby; Adam is someone I have been fortunate to know for many years.

    GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information and offered tips that are intended for both seasoned and new users and that are meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish. Attendees were encouraged to pursue tips that contained new information for them. All 50 tips can be accessed here. Below are several examples of more elaborate tips and a final practical tip on how to get in touch with these folks.

    Tip #1: Using the login Command

      - To execute a remote command with asadmin you must provide the admin's user name and password.
      - The login command allows you to store the login credentials to be reused in subsequent commands.
      - You can be logged into multiple servers (distinguished by host and port).

    Example:

        % asadmin --host ouch login
        Enter admin user name [default: admin]>
        Enter admin password>
        Login information relevant to admin user name [admin]
        for host [ouch] and admin port [4848] stored at
        [/Users/ckasso/.asadminpass] successfully.
        Make sure that this file remains protected.
        Information stored in this file will be used by
        asadmin commands to manage the associated domain.
        Command login executed successfully.

        % asadmin --host ouch list-clusters
        c1 not running
        Command list-clusters executed successfully.

    Tip #4: Using the AS_DEBUG Env Variable

      - Environment variable to control client-side debug output.
      - Exposes: command processing info, the URL used to access the command (http://localhost:4848/__asadmin/uptime), and the raw response from the server.

    Example:

        % export AS_DEBUG=true
        % asadmin uptime
        CLASSPATH= ./../glassfish/modules/admin-cli.jar
        Commands: [uptime]
        asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
        ------- RAW RESPONSE  ---------
        Signature-Version: 1.0
        message: Up 7 mins 10 secs
        milliseconds_value: 430194
        keys: milliseconds
        milliseconds_name: milliseconds
        use-main-children-attribute: false
        exit-code: SUCCESS
        ------- RAW RESPONSE  ---------

    Tip #11: Using Password Aliases

      - Some resources require a password to access (e.g. DB, JMS, etc.).
      - The resource connector is defined in the domain.xml.

    Example: Suppose the DB resource you wish to access requires an entry like this in the domain.xml:

        <property name="password" value="secretp@ssword"/>

    But company policies do not allow you to store the password in the clear.

      - Use password aliases to avoid storing the password in the domain.xml.
      - Create a password alias:

        % asadmin create-password-alias DB_pw_alias
        Enter the alias password>
        Enter the alias password again>
        Command create-password-alias executed successfully.

      - The password is stored in the domain's encrypted keystore.
      - Now update the password value in the domain.xml:

        <property name="password" value="${ALIAS=DB_pw_alias}"/>

    Tip #21: How to Start GlassFish as a Service

      - Configuring a server to automatically start at boot can be tedious, and each platform does it differently. The create-service command makes this easy:
          Windows: creates a Windows service
          Linux: /etc/init.d script
          Solaris: Service Management Facility (SMF) service
      - You must execute create-service with admin privileges.
      - It can be used for the DAS or instances.
      - Try it first with the --dry-run option.
      - There is an (unsupported) _delete-server.

    Example:

        # asadmin create-service domain1
        The Service was created successfully. Here are the details:
        Name of the service:application/GlassFish/domain1
        Type of the service:Domain
        Configuration location of the service:/work/gf-3.1.2.2/glassfish3/glassfish/domains
        Manifest file location on the system:/var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.
        You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:
        * /usr/bin/svcs -a | grep domain1   // status
        * /usr/sbin/svcadm enable domain1   // start
        * /usr/sbin/svcadm disable domain1  // stop
        * /usr/sbin/svccfg delete domain1   // uninstall

    Tip #34: Posting a Command via REST

      - Use wget/curl to execute commands on the DAS.

    Example: Deploying an application

        % curl -s -S \
            -H 'Accept: application/json' -X POST \
            -H 'X-Requested-By: anyvalue' \
            -F id=@/path/to/application.war \
            -F force=true http://localhost:4848/management/domain/applications/application

      - Use @ before a file name to tell curl to send the file's contents.
      - The force option tells GlassFish to force the deployment in case the application is already deployed.

    Tip #46: Upgrading to a Newer Version

      - Upgrade applications and configuration from an earlier version.
      - Upgrade Tool: side-by-side upgrade
          GUI: asupgrade
          CLI: asupgrade --c
          What happens? Copies the older source domain to the target domain directory, then asadmin start-domain --upgrade
      - Update Tool and pkg: in-place upgrade
          GUI: updatetool, install all Available Updates
          CLI: pkg image-update
          Then upgrade the domain: asadmin start-domain --upgrade

    Tip #50: How to reach us?

      - GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
      - [email protected]
      - @glassfish
      - facebook.com/glassfish
      - youtube.com/GlassFishVideos
      - blogs.oracle.com/theaquarium

    Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta's new book Java EE 6 Pocket Guide.

    Read the article

  • June 2013 release of SSDT contains a minor bug that you should be aware of

    - by jamiet
    I have discovered what seems, to me, like a bug in the June 2013 release of SSDT, and given the problems that it created yesterday on my current gig I thought it prudent to write this blog post to inform people of it. I've built a very simple SSDT project to reproduce the problem that has just two tables, [Table1] and [Table2], and also a procedure [Procedure1]. The two tables have exactly the same definition; both have a single column called [Id] of type integer:

        CREATE TABLE [dbo].[Table1]
        (
            [Id] INT NOT NULL PRIMARY KEY
        )

    My stored procedure simply joins the two together, orders them by the column used in the join predicate, and returns the results:

        CREATE PROCEDURE [dbo].[Procedure1]
        AS
            SELECT t1.*
            FROM   Table1 t1
            INNER JOIN Table2 t2
                ON t1.Id = t2.Id
            ORDER BY Id

    Now if I create those three objects manually and then execute the stored procedure, it works fine. So we know that the code works. Unfortunately, SSDT thinks that there is an error here. The text of that error is:

        Procedure: [dbo].[Procedure1] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Table1].[Id] or [dbo].[Table2].[Id].

    It's complaining that the [Id] field in the ORDER BY clause is ambiguous. Now you may well be thinking at this point "OK, just stick a table alias into the ORDER BY predicate and everything will be fine!" Well that's true, but there's a bigger problem here. One of the developers at my current client installed this drop of SSDT and all of a sudden all the builds started failing on his machine; he had errors left, right and centre because, as it transpires, we have a fair bit of code that exhibits this scenario. Worse, previous installations of SSDT do not flag this code as erroneous, and therein lies the rub. We immediately had a mass panic where we had to run around the department to our developers (of which there are many) ensuring that none of them should upgrade their SSDT installation if they wanted to carry on being productive for the rest of the day. Also bear in mind that as soon as a new drop of SSDT comes out the previous version is instantly unavailable, so rolling back is going to be impossible unless you have created an administrative install of SSDT for that previous version.

    Just thought you should know! In the grand schema of things this isn't a big deal as the bug can be worked around with a simple code modification, but forewarned is forearmed, so they say! Last thing to say: if you want to know which version of SSDT you are running, check my blog post Which version of SSDT Database Projects do I have installed?

    @Jamiet
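    For completeness, a minimal sketch of the alias workaround mentioned above; qualifying the ORDER BY column is enough to satisfy the June 2013 SSDT build:

        CREATE PROCEDURE [dbo].[Procedure1]
        AS
            SELECT t1.*
            FROM   Table1 t1
            INNER JOIN Table2 t2
                ON t1.Id = t2.Id
            ORDER BY t1.Id   -- alias added so SSDT no longer sees the column as ambiguous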

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select "Include Prereleases" in the NuGet package manager, or execute the following from the package manager console to install it:

        PM> Install-Package EntityFramework -Pre

    This week's alpha release includes a bunch of great improvements in the following areas:

      - Async language support is now available for queries and updates when running on .NET 4.5.
      - Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
      - Multi-tenant migrations allow the same database to be used by multiple contexts, with full Code First Migrations support for independently evolving the model backing each context.
      - Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
      - All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
      - Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

    Below are some additional details about a few of the improvements above.

    Async Support

    .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications as database calls made through EF can now be processed asynchronously, avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

        public class HomeController : Controller
        {
            LocationContext db = new LocationContext();

            public async Task<ActionResult> Index()
            {
                var locations = await db.Locations.ToListAsync();

                return View(locations);
            }
        }

    Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources.

    A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application.

    Custom Conventions

    When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with "ID" and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in "Key" instead of "Id". Custom conventions allow the default conventions to be overridden or new conventions to be added so that Code First can map by convention using whatever rules make sense for your project.

    The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method which is overridden on your derived DbContext class:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Properties<decimal>()
                .Configure(x => x.HasPrecision(5));
        }

    But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom Code First conventions is available here.

    Community Involvement

    I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions and these are included in EF6 alpha 2. Two examples of community contributions are:

      - AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.
      - UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Configurations
                .AddFromAssembly(typeof(LocationContext).Assembly);
        }

    This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and Code First configuration classes, and is also a very convenient shortcut for large models.

    Other upcoming features coming in EF 6

    Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

    Summary

    EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
    The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2.

    MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as:

      - A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7.
      - Support for new MySQL 5.7.4 SQL syntax and configuration options.
      - Metadata Locks View shows the locks connections are blocked or waiting on.
      - MySQL Fabric cluster connectivity: browsing, viewing status, and connecting to any MySQL instance in a Fabric Cluster.
      - MS Access Migration Wizard: easily move to MySQL databases.

    Other significant usability improvements were made, aiming to raise productivity for advanced and new users:

      - Direct shortcut buttons to commonly used features in the schema tree.
      - Improved results handling. Columns have better auto-sizing and their widths are saved. Fonts can also be customized.
      - Results "pinned" to persist viewing data.
      - A convenient Run SQL Script command to directly execute SQL scripts, without loading them first.
      - Database Modeling has been updated to allow changes to the formatting of note objects, and attached SQL scripts can now be included in forward engineering and synchronization scripts.
      - Integrated Visual Explain within the result set panel.
      - Visual Explain drill down for large to very large explain plans.
      - Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance.
      - And much more.

    The list of provided binaries was updated and MySQL Workbench binaries are now available for:

      - Windows 7 or newer
      - Mac OS X Lion or newer
      - Ubuntu 12.04 LTS and Ubuntu 14.04
      - Fedora 20
      - Oracle Linux 6.5
      - Oracle Linux 7
      - Sources for building in other Linux distributions

    For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html

    For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151

    Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04, or sources, from: http://dev.mysql.com/downloads/tools/workbench/

    On behalf of the MySQL Workbench and the MySQL/Oracle RE Team.

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format).

    My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiars (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP to me feels like very old technology. Is performance really an issue anymore?

    The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file or database table containing config information existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes?

    In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure if the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive).

    Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months min, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
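    As a thought experiment on the dynamic SQL alternative: assuming the hypothetical FACT_/DIM_ naming scheme above and a user who picked the Location and Weekly dimensions with the Product Sales $ metric, the API would only need to emit something like the following (all table and column names are invented for illustration):

        SELECT d_loc.store_name,
               d_date.week_of_year,
               SUM(f.product_sales_amt) AS product_sales
        FROM   FACT_SALES f
        JOIN   DIM_LOCATION d_loc  ON f.location_key = d_loc.location_key
        JOIN   DIM_DATE     d_date ON f.date_key     = d_date.date_key
        GROUP BY d_loc.store_name, d_date.week_of_year;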

    Read the article

  • Fetching e-mails into Redmine via IMAP

    - by Danilo Bargen
    I'm trying to fetch e-mails into Redmine via IMAP. The e-mails I'm generating look like this:

        FooBar Ltd 123456
        http://example.com/Foobar-Ltd-123456.html
        Project: backend
        Tracker: Dataerror
        Beschreibung: This is the description
        ===========================
        CLIENT_IP: 192.168.1.215
        HTTP_USER_AGENT: mozilla/asdfjköl

    I try to fetch them into Redmine via this command:

        rake -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \
        RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \
        password=1234 project=myproject tracker=other \
        allow_override=project,tracker,category,priority \
        move_on_success=read move_on_failure=failed

    But the e-mails get moved into the failed folder. I had this setup running some time ago with a different e-mail generator but pretty much the same template, and I can't figure out why it's not working. The permissions seem to be OK, too. In order to further debug this issue, I need some log files. Are there any log files written by this command? Or are there any other suggestions to solve this issue? My environment:

        danilo@jabba:/var/www/projects/redmine$ RAILS_ENV=production script/about
        About your application's environment
        Ruby version              1.8.7 (i486-linux)
        RubyGems version          1.3.5
        Rack version              1.0
        Rails version             2.3.5
        Active Record version     2.3.5
        Active Resource version   2.3.5
        Action Mailer version     2.3.5
        Active Support version    2.3.5
        Application root          /var/www/projects/redmine
        Environment               production
        Database adapter          mysql
        Database schema version   20100819172912
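
    The rake task itself is fairly quiet; Redmine normally records MailHandler errors in log/production.log, which is the first place to look. To rule out the mailbox side independently of Redmine, the raw messages in the failed folder can be inspected with Python's standard imaplib. A minimal sketch, reusing the credentials from the rake call above:

        import imaplib

        M = imaplib.IMAP4_SSL("example.com", 993)
        M.login("redmine", "1234")
        M.select("failed")                    # the folder move_on_failure points at
        typ, data = M.search(None, "ALL")
        for num in data[0].split():
            typ, msg = M.fetch(num, "(RFC822)")
            # Print the raw message, headers included, to check that the
            # keyword lines (Project:, Tracker:, ...) arrive as expected.
            print(msg[0][1].decode("utf-8", "replace"))
        M.logout()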

    Read the article

  • Unable to connect Java web service to Android

    - by nag prakash
    This is my Android activity. Please help me out. I can send the complete project if you drop your mail ID.

        package prakash.ws.connectsql;

        import org.ksoap2.SoapEnvelope;
        import org.ksoap2.serialization.SoapObject;
        import org.ksoap2.serialization.SoapPrimitive;
        import org.ksoap2.serialization.SoapSerializationEnvelope;
        import org.ksoap2.transport.AndroidHttpTransport;
        import android.os.Bundle;
        import android.app.Activity;
        import android.widget.EditText;
        import android.widget.TextView;

        public class MainActivity extends Activity {

            private static final String Soap_Action = "http://testws.ws.prakash/testws";
            private static final String Method_Name = "testws";
            private static final String Name_Space = "http://testws.ws.prakash/";
            private static final String URI = "http://localhost:8045/testws/services/Testws?wsdl";

            EditText ET;
            TextView Tv;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);

                // Packaging the request
                SoapObject request = new SoapObject(Name_Space, Method_Name);

                // pass the parameters to the method, if it has any
                request.addProperty("name", ET.getText().toString());

                // passing the entire request to the envelope
                SoapSerializationEnvelope soapEnvelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
                soapEnvelope.setOutputSoapObject(request);

                // transporting the envelope
                AndroidHttpTransport aht = new AndroidHttpTransport(URI);
                try {
                    aht.call(Soap_Action, soapEnvelope);
                    @SuppressWarnings("deprecation")
                    SoapPrimitive resultString = (SoapPrimitive) soapEnvelope.getResult();
                    Tv.setText(resultString.toString());
                } catch (Exception e) {
                    Tv.setText("error");
                }
            }
        }

    The WSDL of the service:

        <wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
            xmlns:ns1="http://org.apache.axis2/xsd" xmlns:ns="http://testws.ws.prakash"
            xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl"
            xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
            xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
            xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
            targetNamespace="http://testws.ws.prakash">
          <wsdl:documentation>Please Type your service description here</wsdl:documentation>
          <wsdl:types>
            <xs:schema attributeFormDefault="qualified" elementFormDefault="qualified"
                targetNamespace="http://testws.ws.prakash">
              <xs:element name="testws">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element minOccurs="0" name="name" nillable="true" type="xs:string"/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
              <xs:element name="testwsResponse">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element minOccurs="0" name="return" nillable="true" type="xs:string"/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:schema>
          </wsdl:types>
          <wsdl:message name="testwsRequest">
            <wsdl:part name="parameters" element="ns:testws"/>
          </wsdl:message>
          <wsdl:message name="testwsResponse">
            <wsdl:part name="parameters" element="ns:testwsResponse"/>
          </wsdl:message>
          <wsdl:portType name="TestwsPortType">
            <wsdl:operation name="testws">
              <wsdl:input message="ns:testwsRequest" wsaw:Action="urn:testws"/>
              <wsdl:output message="ns:testwsResponse" wsaw:Action="urn:testwsResponse"/>
            </wsdl:operation>
          </wsdl:portType>
          <wsdl:binding name="TestwsSoap11Binding" type="ns:TestwsPortType">
            <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
            <wsdl:operation name="testws">
              <soap:operation soapAction="urn:testws" style="document"/>
              <wsdl:input><soap:body use="literal"/></wsdl:input>
              <wsdl:output><soap:body use="literal"/></wsdl:output>
            </wsdl:operation>
          </wsdl:binding>
          <wsdl:binding name="TestwsSoap12Binding" type="ns:TestwsPortType">
            <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
            <wsdl:operation name="testws">
              <soap12:operation soapAction="urn:testws" style="document"/>
              <wsdl:input><soap12:body use="literal"/></wsdl:input>
              <wsdl:output><soap12:body use="literal"/></wsdl:output>
            </wsdl:operation>
          </wsdl:binding>
          <wsdl:binding name="TestwsHttpBinding" type="ns:TestwsPortType">
            <http:binding verb="POST"/>
            <wsdl:operation name="testws">
              <http:operation location="testws"/>
              <wsdl:input><mime:content type="text/xml" part="parameters"/></wsdl:input>
              <wsdl:output><mime:content type="text/xml" part="parameters"/></wsdl:output>
            </wsdl:operation>
          </wsdl:binding>
          <wsdl:service name="Testws">
            <wsdl:port name="TestwsHttpSoap11Endpoint" binding="ns:TestwsSoap11Binding">
              <soap:address location="http://localhost:8045/testws/services/Testws.TestwsHttpSoap11Endpoint/"/>
            </wsdl:port>
            <wsdl:port name="TestwsHttpSoap12Endpoint" binding="ns:TestwsSoap12Binding">
              <soap12:address location="http://localhost:8045/testws/services/Testws.TestwsHttpSoap12Endpoint/"/>
            </wsdl:port>
            <wsdl:port name="TestwsHttpEndpoint" binding="ns:TestwsHttpBinding">
              <http:address location="http://localhost:8045/testws/services/Testws.TestwsHttpEndpoint/"/>
            </wsdl:port>
          </wsdl:service>
        </wsdl:definitions>

    The web service is running fine on the server, and I have added the INTERNET permission in the manifest file. Now this is the error in LogCat:

        07-04 21:31:00.757: E/dalvikvm(375): Could not find class 'org.ksoap2.serialization.SoapObject', referenced from method prakash.ws.connectsql.MainActivity.onCreate
        07-04 21:31:00.757: W/dalvikvm(375): VFY: unable to resolve new-instance 481 (Lorg/ksoap2/serialization/SoapObject;) in Lprakash/ws/connectsql/MainActivity;
        07-04 21:31:00.757: D/dalvikvm(375): VFY: replacing opcode 0x22 at 0x0008
        07-04 21:31:00.757: D/dalvikvm(375): VFY: dead code 0x000a-004e in Lprakash/ws/connectsql/MainActivity;.onCreate (Landroid/os/Bundle;)V
        07-04 21:31:00.937: D/AndroidRuntime(375): Shutting down VM
        07-04 21:31:00.937: W/dalvikvm(375): threadid=1: thread exiting with uncaught exception (group=0x40015560)
        07-04 21:31:00.957: E/AndroidRuntime(375): FATAL EXCEPTION: main
        07-04 21:31:00.957: E/AndroidRuntime(375): java.lang.NoClassDefFoundError: org.ksoap2.serialization.SoapObject
        07-04 21:31:00.957: E/AndroidRuntime(375): at prakash.ws.connectsql.MainActivity.onCreate(MainActivity.java:30)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1611)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1663)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.app.ActivityThread.access$1500(ActivityThread.java:117)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:931)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.os.Handler.dispatchMessage(Handler.java:99)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.os.Looper.loop(Looper.java:123)
        07-04 21:31:00.957: E/AndroidRuntime(375): at android.app.ActivityThread.main(ActivityThread.java:3683)
        07-04 21:31:00.957: E/AndroidRuntime(375): at java.lang.reflect.Method.invokeNative(Native Method)
        07-04 21:31:00.957: E/AndroidRuntime(375): at java.lang.reflect.Method.invoke(Method.java:507)
        07-04 21:31:00.957: E/AndroidRuntime(375): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
        07-04 21:31:00.957: E/AndroidRuntime(375): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
        07-04 21:31:00.957: E/AndroidRuntime(375): at dalvik.system.NativeStart.main(Native Method)
        07-04 21:31:05.307: I/Process(375): Sending signal. PID: 375 SIG: 9

    Read the article

  • Always fails connecting to Outlook Anywhere through TMG 2010 with certificate?

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync using TMG 2010, and OWA internally only, but somehow when I tried to publish Outlook Anywhere it failed (as can be seen from https://www.testexchangeconnectivity.com).

    Settings:

    IIS 7 settings: I have unchecked "Require SSL" and set client certificates to "Ignore".

    Exchange CAS settings:

        ServerName                 : ExCAS02-VM
        SSLOffloading              : True
        ExternalHostname           : activesync.domain.com
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : ExCAS02-VM
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
        Identity                   : ExCAS02-VM\Rpc (Default Web Site)
        Guid                       : 59873fe5-3e09-456e-9540-f67abc893f5e
        ObjectCategory             : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 18/02/2011 4:31:54 PM
        WhenCreated                : 18/02/2011 4:30:27 PM
        OriginatingServer          : ADDC01.domainad.com
        IsValid                    : True

    Test-OutlookWebServices results:

        1013 Error When contacting https://activesync.domain.com/Rpc received the error The remote server returned an error: (500) Internal Server Error.
        1017 Error [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.

    https://www.testexchangeconnectivity.com testing result:

        Checking the IIS configuration for client certificate authentication.
        Client certificate authentication was detected.
        Additional Details: Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication.

    Environment:

    - Windows Server 2008 (HT-CAS)
    - Exchange Server 2007 SP1
    - TMG 2010 Standard
    - Outlook 2007 client SP2

    Any kind of help would be greatly appreciated. Thanks.
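
    One quick external sanity check, independent of TMG: a healthy RPC proxy virtual directory normally answers an anonymous request with a 401 challenge, so the 500 seen in Test-OutlookWebServices usually points at the virtual directory or its SSL/certificate settings rather than at the client. This is a heuristic, not a guarantee. A standard-library-only sketch (the hostname is the one from the question; everything else is an assumption):

        import ssl
        import urllib.request
        import urllib.error

        url = "https://activesync.domain.com/rpc/rpcproxy.dll"
        ctx = ssl.create_default_context()

        try:
            urllib.request.urlopen(url, context=ctx, timeout=10)
            print("200 OK (unexpected for /rpc)")
        except urllib.error.HTTPError as e:
            # 401 = RPC proxy is up and demanding auth (a good sign);
            # 500 = server-side misconfiguration, matching the test output.
            print("HTTP", e.code, e.reason)
        except urllib.error.URLError as e:
            print("Connection problem:", e.reason)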

    Read the article

  • snort with barnyard2 not working on Fedora 12

    - by aHunter
    Has anyone come across this error with barnyard2 and snort?

        --== Initializing Barnyard2 ==--
        Initializing Input Plugins!
        Initializing Output Plugins!
        Parsing config file "/etc/snort/barnyard2.conf"
        Log directory = /var/log/barnyard2
        database: compiled support for (mysql)
        database: configured to use mysql
        database: schema version = 107
        database: host = localhost
        database: user = test
        database: database name = snort
        database: sensor name = localhost:eth0
        database: sensor id = 1
        database: data encoding = hex
        database: detail level = full
        database: ignore_bpf = no
        database: using the "log" facility
        --== Initialization Complete ==--

        ______   -*> Barnyard2 <*-
        / ,,_  \  Version 2.1.8 (Build 251)
        |o"  )~|  By the SecurixLive.com Team: http://www.securixlive.com/about.php
        + '''' +  (C) Copyright 2008-2010 SecurixLive.
                  Snort by Martin Roesch & The Snort Team: http://www.snort.org/team.html
                  (C) Copyright 1998-2007 Sourcefire Inc., et al.

        WARNING: Ignoring corrupt/truncated waldofile '/var/log/snort/barnyard.waldo'
        Opened spool file '/var/log/snort/snort.log.1282004944'
        ERROR: Unknown record type read: 104
        Fatal Error, Quitting..

    Snort seems to be working correctly, as I have managed to get logs via syslog, but when I try to use the barnyard config via Unified2 it is not working, presumably because of the above error. Thanks in advance.
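
    "ERROR: Unknown record type read: 104" typically means barnyard2 opened a spool file that is not in unified2 format, for instance when snort.conf still uses the older "output log_unified" (or plain binary pcap logging) rather than "output unified2". That is a likely cause, not a certain one. A small sketch that checks which output plugins are active in the config (the path is an assumption):

        # Check whether snort is actually configured to write unified2,
        # which is the only spool format barnyard2 understands.
        CONF = "/etc/snort/snort.conf"

        with open(CONF) as f:
            active = [line.strip() for line in f
                      if line.strip().startswith("output")]

        print("Active output lines:")
        for line in active:
            print(" ", line)
        print("unified2 configured:", any("unified2" in l for l in active))
        # Expected something like:
        #   output unified2: filename snort.log, limit 128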

    Read the article

  • NAT, iptables and problematic ports

    - by Rajie
    I am building a small office network with virtual machines. My setup is this:

    - Computer A: gateway, IP 1.1.1.1, iptables used for NAT [eth0 = public internet, DHCP; eth1 = gateway]
    - Computer B: client, IP 1.1.1.2, using Computer A as its gateway.

    NAT is working, and Computer B can access the internet through A's gateway. I redirected some incoming ports from A to B (for instance, if A receives a request on port 80, it goes automatically to Computer B's Apache). The thing is that I do not really understand how to open and close ports for Computer B from Computer A. I know how to close a port:

        iptables -A INPUT -p tcp --dport 80 -j DROP

    That refuses all incoming (not outgoing) connections to port 80. However, it only works for the main interface eth0. I tried, for instance, to drop incoming and outgoing connections for Computer B, port 80:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j DROP
        iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 80 -j DROP

    But it does not work, and I cannot figure out what I am doing wrong. Any clue?
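
    One likely culprit is rule order: a typical NAT gateway already carries a blanket ACCEPT for eth1-to-eth0 traffic in the FORWARD chain, and -A appends the new DROP rules after it, so they never match. Inserting at the top of the chain usually fixes that. The sketch below wraps the two corrected commands in Python's subprocess purely for illustration; the same commands can be typed directly, root is required, and the interface and port values mirror the question:

        import subprocess

        def iptables(*args):
            """Run one iptables command, raising on failure (requires root)."""
            subprocess.run(["iptables", *args], check=True)

        # Insert at position 1 so the DROP precedes any blanket ACCEPT rule.
        # Matching --dport 80 on eth1->eth0 blocks B's outgoing HTTP requests;
        # reply packets of other connections still hit the ESTABLISHED rule.
        iptables("-I", "FORWARD", "1", "-i", "eth1", "-o", "eth0",
                 "-p", "tcp", "--dport", "80", "-j", "DROP")

        # Incoming connections to B's port 80 arrive via the DNAT redirect and
        # traverse FORWARD as eth0->eth1 with --dport 80 after translation.
        iptables("-I", "FORWARD", "1", "-i", "eth0", "-o", "eth1",
                 "-p", "tcp", "--dport", "80", "-j", "DROP")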

    Read the article

  • How many users can be in an AD LDS group?

    - by ixe013
    Microsoft published the recommended maximum limits for users in an Active Directory group. It basically says:

    "Starting with Windows Server 2003, the ability to replicate discrete changes to linked multivalued properties was introduced as a technology called Linked Value Replication (LVR)."

    and

    "This allows the number of group memberships to exceed the former recommended limit of 5,000 for Windows 2000 or Windows Server 2003 at a forest functional level of Windows 2000."

    Given the replication metadata below, can anybody tell me the maximum number of users an AD LDS group can hold?

        Getting 'CN=Member,CN=Schema,CN=Configuration,CN={67B333FE-ADB4-430D-AAEE-D4CCE4B98A2E}' metadata... 23 entries.

        AttID  Ver  Loc.USN  Originating DSA                       Org.USN  Org.Time/Date
        =====  ===  =======  ====================================  =======  ===================
        0      1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        3      1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20001  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20002  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        2001e  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20020  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20021  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20032  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200a9  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200c2  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200da  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200e2  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200e7  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20119  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        2014e  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        201cc  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90001  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90094  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90095  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        900aa  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90177  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        9027f  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        9030e  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
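
    Since LVR lifted the old ~5,000-member recommendation, the practical ceiling is set by replication and retrieval behaviour rather than a hard cap. One retrieval detail worth knowing once groups get large: directory servers return big multivalued attributes such as member in ranges (the MaxValRange LDAP policy, commonly 1500 values per chunk). A sketch of reading the first chunk, assuming the third-party ldap3 package; the host, port, bind DN and group DN are all placeholders:

        from ldap3 import Server, Connection, BASE  # pip install ldap3

        conn = Connection(Server("ldaphost", port=50000),
                          "CN=admin,CN=Roles,CN=Config", "password",
                          auto_bind=True)

        # Ask explicitly for the first ranged chunk of the member attribute.
        conn.search("CN=BigGroup,OU=Groups,O=Demo", "(objectClass=*)",
                    search_scope=BASE,
                    attributes=["member;range=0-1499"])

        attrs = conn.response[0]["attributes"]
        for key, values in attrs.items():   # key is e.g. 'member;range=0-1499'
            print(key, len(values))
        # Repeat with range=1500-2999 and so on until the server returns a
        # key ending in '-*', which marks the final chunk.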

    Read the article
