Search Results

Search found 5516 results on 221 pages for 'scope identity'.


  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we cooperate with. Now, top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology, per-user granted access exceptions within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, once I have a users table in the DB anyway (and am managing ugly details like storing historical user IDs, so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity from implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to code the authentication myself if it were done in-system (and my boss has clearly stated that getting the project delivered on time has a much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
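    For what it's worth, the "lookup of a password hash" piece is small but easy to get wrong. A minimal sketch of the in-system route, assuming salted PBKDF2 via Rfc2898DeriveBytes (in the framework since .NET 2.0; class name, sizes, and iteration count here are illustrative, not a vetted implementation):

        using System;
        using System.Security.Cryptography;

        public static class PasswordHasher
        {
            // Illustrative parameters; tune the iteration count to your hardware.
            private const int SaltSize = 16, HashSize = 20, Iterations = 10000;

            public static string Hash(string password)
            {
                var salt = new byte[SaltSize];
                new RNGCryptoServiceProvider().GetBytes(salt);
                var hash = new Rfc2898DeriveBytes(password, salt, Iterations).GetBytes(HashSize);
                return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
            }

            public static bool Verify(string password, string stored)
            {
                var parts = stored.Split(':');
                byte[] salt = Convert.FromBase64String(parts[0]);
                byte[] expected = Convert.FromBase64String(parts[1]);
                byte[] actual = new Rfc2898DeriveBytes(password, salt, Iterations).GetBytes(HashSize);
                // Compare every byte so a mismatch position isn't leaked via timing.
                int diff = 0;
                for (int i = 0; i < expected.Length; i++) diff |= expected[i] ^ actual[i];
                return diff == 0;
            }
        }

    Storing only salt + hash like this covers the credential-storage half; lockout policy, recovery flows, and audit logging are the parts that usually take longer than expected.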

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, and categories for those products (a product can be in one or more categories). These categories are referred to as "Product Categories", and there is also a record of which stores each product is available at. These tables are updated once a week from a feed from the customer. Since some of the product categories are the same, or closely related, for our purposes, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers:

    Data tables:
    - Products: 475,000
    - Manufacturers: 1,300
    - Stores: 150
    - General Categories: 245
    - Product Categories: 500

    Mapping tables:
    - Product Category -> Product: 655,000
    - Stores -> Products: 50,000,000

    Now, for the actual problem: as part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them on manufacturer, and including each row's number from within its partition, then selecting from that where the row number for that manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. The query looks something like this (I've tried to format it to make it cleaner, but I don't think it really helps):

        SET ROWCOUNT 6

        select p.Id, GeneralCategory_Id, Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               MSRP, MemberPrice, FamilyImageName
        from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
                     ctp.Store_id, Manufacturer_id,
                     ROW_NUMBER() OVER (PARTITION BY Manufacturer_id ORDER BY NEWID()) AS 'VendorOrder',
                     MSRP, MemberPrice, FamilyImageName
              from GeneralCategory gc
              inner join GeneralCategoriesToProductCategories gctpc ON gc.Id = gctpc.GeneralCategory_Id
              inner join ProductCategoryToProduct pctp on gctpc.ProductCategory_Id = pctp.ProductCategory_Id
              inner join Product p on p.Id = pctp.Product_Id
              inner join StoreToProduct ctp on p.Id = ctp.Product_id
              where gc.Id = @GeneralCategory and ctp.Store_id = @StoreId
                and p.Active = 1 and p.MemberPrice > 0) p
        inner join Manufacturer m on m.Id = p.Manufacturer_id
        where VendorOrder <= 6
        order by NEWID()

        SET ROWCOUNT 0

    Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time:

    - Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when making this table, but I'm not concerned about this at this point, compared to the other seek...
    - Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.

    On categories without a lot of products, performance is acceptable (<50ms); however, larger categories can take a few hundred ms, with the largest category (about 170k products) taking 3s.

    It seems I have two ways to go from this point:

    1. Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and maintain the requirements (random products, with a good mix of manufacturers).
    2. Denormalize this data for the purpose of this query during the once-a-week import. However, I am unsure how to do this and maintain the requirements.

    Does anyone have any input on either of these items?

    Read the article

  • In the Firing Line: The impact of project and portfolio performance on the CEO

    - by Melissa Centurio Lopes
    What are the primary measurements for rating CEO performance? For corporate boards, business analysts, investors, and the trade press, the metrics they deploy are relatively binary in nature: what is being done to generate earnings, and what is being done to build and sustain high performance? As for the market, interest is primarily aroused when operational and financial performance falls outside planned commitments for the year. When organizations announce better-than-predicted results, they usually experience an immediate increase in share price. Likewise, poor results have an obviously negative impact on the share price and impact the role and tenure of the incumbent CEO.

    The danger for the CEO is that the risk of failure is ever present, ranging from manufacturing delays and supply chain issues to labor shortages and scope creep. This risk is enhanced by the involvement of secondary suppliers providing services critical to overall work schedules, and magnified further across a portfolio of programs and projects underway at any one time, all set within a global context. All can impact planned return on investment and have an inevitable impact on the share price, the primary empirical measure of day-to-day performance.

    Read the complete complimentary report, In the Firing Line, and explore the direct link between the health of the portfolio and CEO performance. The report provides an overview of the responsibility the CEO has for implementing and maintaining a culture of accountability, offers examples of some of the higher-profile project failings in recent years, and details the capabilities available to the CEO to mitigate the risks residing in their own portfolios.

    Read the article

  • Internet of Things Becoming Reality

    - by kristin.jellison
    The Internet of Things is not just on the radar, it's becoming a reality. A globally connected continuum of devices and objects will unleash untold possibilities for businesses and the people they touch. But the "things" are only a small part of a much larger, integrated architecture.

    A great example of this comes from the healthcare industry. Imagine an expectant mother who needs to watch her blood pressure. She lives in a mountain village 100 miles away from medical attention. Luckily, she can use a small "wearable" device to monitor her status and wirelessly transmit the information to a healthcare hub in her village. Now, say the healthcare hub identifies that the expectant mother's blood pressure is dangerously high. It sends a real-time alert to the patient's wearable device, advising her to contact her doctor. It also pushes an alert with the patient's historical data to the doctor's tablet PC. He inserts a smart security card into the tablet to verify his identity. This ensures that only the right people have access to the patient's data. Then, comparing the new data with the patient's medical history, the doctor decides she needs urgent medical attention. GPS tracking devices on ambulances in the field identify and dispatch the closest one available. An alert also goes to the closest hospital with the necessary facilities. It sends real-time information on her condition directly from the ambulance. So when she arrives, they already have a treatment plan in place to ensure she gets the right care.

    The Internet of Things makes a huge difference for the patient. She receives personalized and responsive healthcare. But this technology also helps the businesses involved. The healthcare provider achieves a competitive advantage in its services. The hospital benefits from cost savings through more accurate treatment and better application of services. All of this, in turn, translates into savings on insurance claims.

    This is an ideal scenario for the Internet of Things: when all the devices integrate easily and when the relevant organizations have all the right systems in place. But in reality, that can be difficult to achieve. Core design principles are required to make the whole system work. Open standards allow these systems to talk to each other. Integrated security protects personal, financial, commercial and regulatory information. A reliable and highly available systems infrastructure is necessary to keep these systems running 24/7.

    If this system were just made up of separate components, it would be prohibitively complex and expensive for almost any organization. The solution is integration, and Oracle is leading the way. We're developing converged solutions, not just from device to datacenter, but across devices, utilizing the Java platform, and through data acquisition and management, integration, analytics, security and decision-making. The Internet of Things (IoT) requires the predictable action and interaction of a potentially endless number of components. It's in that convergence that the true value of the Internet of Things emerges.

    Partners who take the comprehensive view and choose to engage with the Internet of Things as a fully integrated platform stand to gain the most from its many opportunities. To discover what else Oracle is doing to connect the world, read about Oracle's Internet of Things Platform. Learn how you can get involved as a partner by checking out the Oracle Java Knowledge Zone.

    Best regards,
    David Hicks

    Read the article

  • Intermittent internet connectivity

    - by Rob Oplawar
    UPDATED: I recently built a new computer and set it up to dual-boot Windows 7 and Ubuntu 11.10. In Windows, using the same hardware, my LAN connectivity is solid. In Ubuntu, however, my network interface periodically dies and resets itself; I'll have a solid connection for 30 seconds, and then it will go out for 30 seconds. When I tail the log:

        tail -f /var/log/kern.log

    I see "eth0 link up" messages appear periodically, corresponding with the return of connectivity. I posted the original question months ago, and misinterpreted what was going on. With a working Internet connection in Windows, I ignored the problem for some months. See my answer below for the solution (drivers).

    ORIGINAL POST: In Ubuntu, although I maintain a solid connection to my LAN (pinging the router IP address consistently returns a good result), my internet connectivity drops in and out. When I continuously ping 74.125.227.18 (a google.com server), I get responses for a while, then I start getting "Destination Host Unreachable" for a while, then I get responses again. This happens consistently, dropping the connection for about 30 seconds out of every minute or two. Whether I configure my network via the network manager or via /etc/network/interfaces seems to make no difference. I configure with the following settings:

        address 192.168.1.101
        network 192.168.1.0
        gateway 192.168.1.99 (my router's IP address)
        netmask 255.255.255.0 (confirmed as the right netmask for the router)
        broadcast 192.168.1.255 (also confirmed with the router)

    ifconfig confirms that these settings are working:

        eth0      Link encap:Ethernet  HWaddr 50:e5:49:40:da:a6
                  inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::52e5:49ff:fe40:daa6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11557 errors:0 dropped:11557 overruns:0 frame:11557
                  TX packets:13117 errors:0 dropped:211 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9551488 (9.5 MB)  TX bytes:1930952 (1.9 MB)
                  Interrupt:41 Base address:0xa000

    I get the same issue when I use automatic DHCP address settings, although I did confirm that there is no other machine on the network with the static IP address I want to use. As I said, the connection to the local network stays solid; I never have any trouble pinging 192.168.1.*. It's internet addresses that I intermittently cannot reach. It's not a DNS issue, because pinging known IP addresses directly shows the same behavior. Also, I don't think it's a hardware issue, as I never have any internet connectivity problems on the same machine in Windows. The network hardware is built into the motherboard: Gigabyte Z68XP-UD3P. I managed to bring the OS fully up to date, according to the update manager, but it didn't fix the issue, and with my limited understanding of network architecture I'm at my wit's end. The only clue I can see is that ifconfig is reporting a lot of dropped packets, but I'm not sure what to do about it.

    UPDATE: It seems my problem is a little more generic than I described; now when I try pinging my router and google simultaneously, they both go unreachable at the same time. Running ifdown eth0 and then ifup eth0 brings it back temporarily; if I just wait, it comes back after a couple of minutes. I'll broaden my search through intermittent network connectivity problems.

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question; feel free to skip down for the specific question. Hello, I've been very interested in the idea of abstract programming over the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year, and while they got close, I never released anything beyond my compiler. This is something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes of course, you noob! You can't account for everything!" To which I have to reply, "Why not?"

    To give you some background on what I'm talking about, this all started with writing what was maybe a shade-of-gray-hat SEO software. I found myself constantly having to create similar, but slightly different, sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut mixed with the delete button. It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code as time went on, and I could version things out and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract.

    However, the software that I have in my mind raises quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this list of requirements:

    - Should be able to use networking
    - A "macro" or "expression" system which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google))
    - Multithreaded
    - Able to add UI elements through some type of XML, so people can make their own addons, etc.
    - Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc.

    This would allow the software to be used for essentially everything; really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult.

    So my question: with so many attempts at this, I'm out of ideas how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and carry it through to completion? These are things I struggle with. P.S., I'm using C# as my main language. I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case, or if I'm just a bad programmer. Thanks for your time.
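    If it helps, the plugin side of such a system usually reduces to a tiny contract plus a reflection-based loader. A minimal sketch in C# (all type names hypothetical, error handling omitted):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        // The host only ever sees this contract; capabilities ship as DLLs.
        public interface IPlugin
        {
            string Name { get; }
            object Execute(IDictionary<string, object> args);
        }

        public static class PluginLoader
        {
            public static IEnumerable<IPlugin> LoadFrom(string directory)
            {
                foreach (var file in Directory.GetFiles(directory, "*.dll"))
                {
                    var assembly = Assembly.LoadFrom(file);
                    foreach (var type in assembly.GetTypes())
                    {
                        if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                            yield return (IPlugin)Activator.CreateInstance(type);
                    }
                }
            }
        }

    The =GetUrl(...) macro layer then becomes a parser that resolves each name to a plugin and recursively evaluates the arguments. Keeping the contract this small is what spares the host from having to anticipate every capability up front.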

    Read the article

  • Are You a WebCenter Innovator?

    - by Michael Snow
    Calling all Oracle WebCenter Innovators: Submit your nomination for the 2012 Innovation Awards. Click here to submit your nomination today.

    Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012. Are you doing something unique and innovative with Oracle Fusion Middleware? Submit a nomination today for the Oracle Fusion Middleware Innovation Awards. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco (September 30 - October 4) and will be honored during a special event at OpenWorld. Categories include:

    - Oracle Exalogic
    - Cloud Application Foundation
    - Service Integration (SOA) and BPM
    - WebCenter
    - Identity Management
    - Data Integration
    - Application Development Framework and Fusion Development
    - Business Analytics (BI, EPM and Exalytics)

    To be considered for this award, complete the Oracle Fusion Middleware Innovation Awards nomination form and send it to [email protected]. The deadline to submit a nomination is 5pm Pacific on July 17, 2012.

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6.

    It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.6 version of MySQL Connector/Net brings the following new features:
      * Stored routine debugging
      * Entity Framework 4.3 Code First support
      * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
      * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense and the Stored Routine debugger

    Stored Procedure Debugging
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions, and triggers. Some of the new features in this release include:
      * Besides normal breakpoints, you can define conditional and pass-count breakpoints.
      * The debugger editor now shows colorizing.
      * You can now change the values of locals in a function scope (previously this caused a deadlock due to functions executing within their own transaction).
      * You can now also debug triggers for 'replace' SQL statements.
      * In general, anything related to locals, watches, breakpoints, stepping, and the call stack should work in a similar way to C#'s Visual Studio debugger.

    Some limitations remain, due to the current debugger architecture:
      * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).
      * Only one debug session may be active on a given server.

    The Debugger is feature complete at this point. We look forward to your feedback.

    Documentation
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html

    You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/.

    Enjoy and thanks for the support!

    Read the article

  • Confused about modifying the sprint backlog during a sprint

    - by Maltiriel
    I've been reading a lot about scrum lately, and I've found what seems to me to be conflicting information about whether or not it's ok to change the sprint backlog during a sprint. The Wikipedia article on scrum says it's not ok, and various other articles say this as well. Also, my software development professor taught the same thing during an overview of scrum. However, I read Scrum and XP from the Trenches, and that describes a section for unplanned items on the taskboard. So then I looked up the Scrum Guide, and it says that during the sprint "No changes are made that would affect the Sprint Goal", and in the discussion of the Sprint Goal, "If the work turns out to be different than the Development Team expected, then they collaborate with the Product Owner to negotiate the scope of Sprint Backlog within the Sprint." It goes on to say in the discussion of the Sprint Backlog:

        The Sprint Backlog is a plan with enough detail that changes in progress can be understood in the Daily Scrum. The Development Team modifies Sprint Backlog throughout the Sprint, and the Sprint Backlog emerges during the Sprint. This emergence occurs as the Development Team works through the plan and learns more about the work needed to achieve the Sprint Goal. As new work is required, the Development Team adds it to the Sprint Backlog. As work is performed or completed, the estimated remaining work is updated. When elements of the plan are deemed unnecessary, they are removed. Only the Development Team can change its Sprint Backlog during a Sprint. The Sprint Backlog is a highly visible, real-time picture of the work that the Development Team plans to accomplish during the Sprint, and it belongs solely to the Development Team.

    So at this point I'm altogether confused. Thinking about it, it makes more sense to me to take the second approach. The individual, specific items in the backlog don't seem to me to be the most important thing, but rather the sprint goal, so not changing the sprint goal but being able to change the backlog makes sense. For instance, if both the product owner and the team thought they were on the same page about a story, but as the sprint progressed they figured out there was a misunderstanding, it seems like it makes sense to change the tasks that make up that story accordingly. Or if there was some story or task that was forgotten about, but is required to reach the sprint goal, I would think it would be best to add the story or task to the backlog during the sprint.

    However, there are a lot of people who seem quite adamant that any change to the sprint backlog is not ok. Am I misunderstanding that position somehow? Are those folks defining the sprint backlog differently somehow? My understanding of the sprint backlog is that it consists of both the stories and the tasks they're broken down into. Anyway, I would really appreciate input on this issue. I'm trying to figure out both what the idealistic scrum approach is to changing the sprint backlog during a sprint, and whether people who use scrum successfully for development allow changing the sprint backlog during a sprint.

    Read the article

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success
    By Mitchell Palski, Oracle WebCenter Sales Consultant

    Many organizations use Oracle WebCenter Portal to develop these basic types of portals:
    - Intranet portals used for collaboration, employee self-service, and company communication
    - Extranet portals used by customers and partners for self-service and support
    - Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions

    Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a portal that would vary based on a user's identity include:
    - Web content such as images, news articles, and on-screen instruction
    - Social tools such as threaded discussions, polls/surveys, and blogs
    - Document management tools to upload, download, and edit files
    - Web applications that present data visualizations and data entry modules

    These collections of content, tools, and applications make up valuable workspaces. The challenge that a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes un-used or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides you with the capabilities to not only rapidly develop variations of portals, but also identify which portals are the most effective and should be re-used throughout an enterprise.

    Capturing Portal Analytics
    Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of:
    - Usage tracking metrics
    - Behavior tracking
    - User profile correlation

    The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic, Page Traffic, Login Metrics, Portlet Traffic, Portlet Response Time, Portlet Instance Traffic, Portlet Instance Response Time, Search Metrics, Document Metrics, Wiki Metrics, Blog Metrics, Discussion Metrics, Portal Traffic, and Portal Response Time.

    By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable. Your first step as an administrator should be to identify which specific pages and/or components are used most frequently. Next, determine the user(s) or user group(s) that are accessing those high-use elements of a portal. It is also important to determine patterns in high usage and see if they correlate to a specific schedule. One of the goals of any development team (especially those following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides you the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics.

    Re-using Successful Portals
    When creating a new portal in Oracle WebCenter, developers have the option to base that portal on a template that includes:
    - Pre-seeded data such as pages, tools, user roles, and look-and-feel assets
    - Specific sub-sets of page layouts, tools, and other resources to standardize what is added to a portal's pages
    - Any custom components that your team creates during development cycles

    Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter's ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences, instead of starting from scratch.

    For a complete explanation of how to work with portal templates, be sure to read the Fusion Middleware documentation available online.

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery.

    Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more; this is exactly what the Inmeta Visual Studio Gallery Service does for you :-)

    Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service
    1. Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward; just select the web site, application pool and (optional) virtual directory where you want to install the service. Note: If you want to run it in the web site root, just leave the application name blank.
    2. Press Next and finish the installer.
    3. Open web.config in a text editor and locate the <applicationSettings> element, then edit the following setting values:
       - FeedTitle: This is the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
       - BaseURI: When Visual Studio downloads the extension, it will be given this URI plus the name of the extension that you selected. This value should be on the following format: http://SERVER/[VDIR]/gallery/extension/
       - VSIXAbsolutePath: This is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions on this folder.
    4. Save web.config to finish the installation.
    5. Open a browser and enter the URL to the service. It should show an empty feed page.

    Adding the Private Gallery in Visual Studio 2012
    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows:
    1. Go to Tools -> Options and select Environment -> Extensions and Updates.
    2. Press Add to add a new gallery.
    3. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
    4. Press OK to save the settings.

    Deploying an Extension
    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery.

    I hope that you will find this service useful. Please contact me if you have questions or suggestions for improvements!
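    The heart of such a service is small: enumerate the VSIX files in the drop folder and emit an Atom feed. A rough sketch of the idea using System.ServiceModel.Syndication (this is not the actual Inmeta source, and the real gallery feed carries extra VSIX-specific metadata, such as the extension identifier and version from extension.vsixmanifest, that this omits):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.ServiceModel.Syndication;

        public static class GalleryFeed
        {
            public static SyndicationFeed Build(string vsixFolder, Uri baseUri)
            {
                var items = new List<SyndicationItem>();
                foreach (var file in Directory.GetFiles(vsixFolder, "*.vsix"))
                {
                    var name = Path.GetFileNameWithoutExtension(file);
                    var item = new SyndicationItem(name, "", new Uri(baseUri, Path.GetFileName(file)));
                    // Visual Studio compares this date to detect updated extensions.
                    item.LastUpdatedTime = File.GetLastWriteTimeUtc(file);
                    items.Add(item);
                }
                return new SyndicationFeed("Private Gallery", "In-house extensions", baseUri, items);
            }
        }

    Because the feed is rebuilt from the folder contents on each request, dropping a new VSIX is all it takes to publish an update.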

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (via DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as:

        [Test]
        public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount()
        {
            ...
        }

    ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally and not locally, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like:

        act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); };

        context["The user is already logged in"] = () =>
        {
            before = () => identity.Setup(x => x.IsAuthenticated).Returns(true);

            context["The external login succeeds"] = () =>
            {
                before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>()))
                    .Returns(new AuthenticationResult(true, providerName, "provideruserid",
                        "username", new Dictionary<string, string>()));

                context["External login already exists for current user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForUser(
                        It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Should add 'login successful' alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("Login successful");
                        alerts[0].AlertType.should_be(AlertType.Success);
                    };

                    it["Should return a redirect result"] = () =>
                        actionResult.should_cast_to<RedirectToRouteResult>();
                };

                context["External login already exists for another user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(
                        It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Adds an error alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account");
                        alerts[0].AlertType.should_be(AlertType.Error);
                    };

                    it["Should return a redirect result"] = () =>
                        actionResult.should_cast_to<RedirectToRouteResult>();
                };
            };
        };

    This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer:

        public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model)
        {
            throw new NotImplementedException();
        }

        public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

        public string GetLocalUserName(string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

    I have no idea what in the world to name the test class, the test methods, or even whether I should fold the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering. I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointing in the right direction. I would even welcome "your question is invalid"-type answers as long as some explanation is provided.
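    Not an authoritative answer, but one way to carry the same style down a layer is to name the spec class after the service under test and let context strings absorb the permutations that previously exploded into method names. A sketch, with all service and scenario names illustrative:

        class describe_AuthenticationServices : nspec
        {
            IAuthenticationServices services;

            void before_each()
            {
                services = new AuthenticationServices(/* mocked repositories here */);
            }

            void when_associating_an_external_login_with_a_user()
            {
                context["the provider/providerUserId pair is new"] = () =>
                {
                    it["persists the association"] = todo;
                };

                context["the pair already belongs to another user"] = () =>
                {
                    it["raises an error instead of re-associating"] = todo;
                };
            }
        }

    Keeping a spec class per architectural layer preserves fast, focused failures, while a separate end-to-end spec per feature can still exercise controller and services together if you want both views.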

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app is running in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I myself am a quite experienced web developer, but have never actually programmed anything non-web (or at least nothing serious).

    I plan on adding functionality to the webapp for which I will need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my webapp using Ghostscript. I've only done it on one-page documents for now, though, and the new functionality will need to process bigger documents. In order for this process to be transparent for admins as well as end users, I would like to implement some kind of queue to delay the processing of the thumbnails. There again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document again. Then I will need to process this queue, and that's where my questions start. Obviously the best solution to process it isn't an ASP script or the like, so I will have to get out of my known environment. No problem, but I have no idea which direction to go. Therefore, a few questions:

    1. What should I develop? I presumably need something that sits on standby on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another more appropriate type of project?
    2. Depending on the first answer, what will be the approach? Should I somehow have SQL Server "tell" the program/service to process the queue, or should I have that program/service periodically check the state of the queue and treat new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (max 50 maybe); I can totally afford to treat the queue one item at a time.
    3. Can you confirm I don't have to look much further into threads and so on? (I found a lot of answers talking about threads in queue treatment, but it looks quite overkill for my needs.)
    4. Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Let's suppose it is currently running and a new record comes into the queue; what should the behaviour be?

    I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
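    For what it's worth, here is a rough sketch of the polling shape from question 2, as it might look inside a Windows service. The schema, column names, and connection string are invented for illustration, and a one-minute System.Timers.Timer with a simple re-entrancy guard stands in for any real scheduling:

        using System;
        using System.Data.SqlClient;
        using System.Timers;

        public class PdfQueueWorker
        {
            private readonly Timer timer = new Timer(60000); // poll every minute
            private bool busy; // cheap guard against overlapping ticks

            public void Start()
            {
                timer.Elapsed += (s, e) => Poll();
                timer.Start();
            }

            private void Poll()
            {
                if (busy) return; // previous tick still working: skip, no threads needed
                busy = true;
                try
                {
                    using (var con = new SqlConnection("...connection string..."))
                    {
                        con.Open();
                        // Hypothetical table: one row per uploaded PDF awaiting thumbnails.
                        var cmd = new SqlCommand(
                            "SELECT TOP 1 Id, PdfPath FROM ThumbnailQueue WHERE Status = 0", con);
                        int id; string path;
                        using (var reader = cmd.ExecuteReader())
                        {
                            if (!reader.Read()) return;
                            id = reader.GetInt32(0);
                            path = reader.GetString(1);
                        }
                        GenerateThumbnails(path); // the existing Ghostscript call, per page
                        new SqlCommand("UPDATE ThumbnailQueue SET Status = 1 WHERE Id = " + id, con)
                            .ExecuteNonQuery();
                    }
                }
                finally { busy = false; }
            }

            private void GenerateThumbnails(string pdfPath) { /* ghostscript per page */ }
        }

    At your volume (about 50 PDFs a day), this single-threaded poll-and-process loop is entirely sufficient, and the busy flag answers the concurrency question: a tick that arrives while work is in progress simply does nothing.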

    Read the article

  • Redundant Interconnect with Highly Available IP (HAIP)

    - by JaneZhang
    Starting with 11.2.0.2, Oracle Grid Infrastructure (GI) supports Redundant Interconnect with Highly Available IP (HAIP). Before 11.2.0.2, redundancy for the private interconnect was typically achieved at the OS level, for example with NIC bonding; with HAIP, Oracle itself provides failover and load balancing across multiple private networks, without relying on third-party bonding software. During GI installation, you can simply mark several interfaces as private interconnects.

    HAIP addresses are allocated from the link-local range 169.254.*.*. With a single private interface, one HAIP is started; with two or more interfaces, up to four HAIPs are used, regardless of how many interfaces there are. You can check this as follows:

        $ crsctl stat res -t -init
        NAME           TARGET  STATE        SERVER   STATE_DETAILS
        Cluster Resources
        --------------------------------------------------------------------------------
        ora.cluster_interconnect.haip
               1        ONLINE  ONLINE       node2

        # ifconfig -a
        eth1      Link encap:Ethernet  HWaddr 00:14:22:BD:59:DE    <===== physical private interface
                  inet addr:192.168.10.2  Bcast:192.168.10.255  Mask:255.255.255.0
                  inet6 addr: fe80::214:22ff:febd:59de/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:54297359 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:58151488 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:837602539 (798.8 MiB)  TX bytes:3809085161 (3.5 GiB)
                  Interrupt:169

        eth1:1    Link encap:Ethernet  HWaddr 00:14:22:BD:59:DE    <===== HAIP on that interface
                  inet addr:169.254.185.195  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:169

    The database instance uses the HAIP for the cluster interconnect:

        Cluster communication is configured to use the following interface(s) for this instance
          169.254.185.195
        cluster interconnect IPC version: Oracle UDP/IP (generic)
        IPC Vendor 1 proto 2

    The ASM instance likewise uses the HAIP for the cluster interconnect:

        Cluster communication is configured to use the following interface(s) for this instance
          169.254.185.195
        cluster interconnect IPC version: Oracle UDP/IP (generic)
        IPC Vendor 1 proto 2

    Oracle states that the ASM and database instances use the HAIP addresses for interconnect traffic, which provides load balancing across the available private networks. If one private network fails, the HAIP on it transparently fails over to a surviving network, so interconnect traffic continues without interruption.

    For more details on HAIP, see My Oracle Support Note 1210883.1.

    Read the article

  • Using AMDU to extract datafiles from a diskgroup that cannot be mounted

    - by Liu Maclean
    AMDU is Oracle's ASM Metadata Dump Utility. AMDU has three main uses:

    1. Dump the metadata on ASM disks to files
    2. Extract the files stored in an ASM diskgroup and write them out to the OS filesystem, whether or not the diskgroup is mounted
    3. Print metadata blocks, as C structures and as hexadecimal dumps

    This article demonstrates the second use: extracting the contents of an ASM diskgroup with AMDU. ASM files are striped across the disks of a diskgroup, and ordinarily, if the diskgroup cannot be mounted, there is no way to read the files inside it. AMDU reads the ASM disks directly, so it works without the diskgroup being mounted and without the RDBMS or ASM instances being up. Although AMDU ships with 11g, it can also be used against 10g ASM disks.

    This is extremely useful in practice: when a database's SPFILE, CONTROLFILE, or DATAFILEs live in an ASM diskgroup that cannot be mounted, for example because of an ASM ORA-600 error, AMDU can still pull them off the ASM disks.

    Scenario 1: extracting the SPFILE, CONTROLFILE, and DATAFILEs.

    First obtain the control_files parameter, for example by converting a surviving SPFILE to a PFILE and reading it, or simply:

        SQL> show parameter control_files

        NAME           TYPE    VALUE
        -------------- ------- ------------------------------
        control_files  string  +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955

    In +DATA/prodb/controlfile/current.260.794687955, the number 260 is the file number of the controlfile within the +DATA diskgroup. You also need the ASM disk discovery path, which you can take from the asm_diskstring parameter of the ASM instance:

        [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
        Archive: amdu_X86-64.zip
        inflating: libskgxp11.so
        inflating: amdu
        inflating: libnnz11.so
        inflating: libclntsh.so.11.1

        [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
        amdu_2009_10_10_20_19_17/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
        DATA_260.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
        total 9548
        -rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
        -rw-r--r-- 1 oracle oinstall    9441 Oct 10 20:19 report.txt

    The extracted DATA_260.f is the controlfile. Point the instance at it and mount the database:

        SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;
        System altered.

        SQL> startup force mount;
        ORACLE instance started.

        Total System Global Area 1870647296 bytes
        Fixed Size                  2229424 bytes
        Variable Size             452987728 bytes
        Database Buffers         1409286144 bytes
        Redo Buffers                6144000 bytes
        Database mounted.

        SQL> select name from v$datafile;

        NAME
        --------------------------------------------------------------------------------
        +DATA/prodb/datafile/system.256.794687873
        +DATA/prodb/datafile/sysaux.257.794687875
        +DATA/prodb/datafile/undotbs1.258.794687875
        +DATA/prodb/datafile/users.259.794687875
        +DATA/prodb/datafile/example.265.794687995
        +DATA/prodb/datafile/mactbs.267.794688457

        6 rows selected.

    With the database mounted, v$datafile lists the datafile names, and each name carries the file's number within the diskgroup. Extract each datafile with ./amdu -diskstring '/dev/asm*' -extract, then verify the result, for example with dbv:

        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
        amdu_2009_10_10_20_22_21/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
        DATA_256.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f

        DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009
        Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

        DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f
        DBVERIFY - Verification complete

        Total Pages Examined          : 90880
        Total Pages Processed (Data)  : 59817
        Total Pages Failing (Data)    : 0
        Total Pages Processed (Index) : 12609
        Total Pages Failing (Index)   : 0
        Total Pages Processed (Other) : 3637
        Total Pages Processed (Seg)   : 1
        Total Pages Failing (Seg)     : 0
        Total Pages Empty             : 14817
        Total Pages Marked Corrupt    : 0
        Total Pages Influx            : 0
        Total Pages Encrypted         : 0
        Highest block SCN             : 1125305 (0.1125305)

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two-part question relating to sending query results as attachments using sp_send_dbmail. Problem 1: Only basic .txt files will open; any other format like .pdf or .jpg arrives corrupted. Problem 2: When attempting to send multiple attachments, I receive one file with all the file names glued together.

    I'm running SQL Server 2005 and I have a table storing uploaded documents:

        CREATE TABLE [dbo].[EmailAttachment](
            [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL,
            [MassEmailID] [int] NULL, -- foreign key
            [FileData] [varbinary](max) NOT NULL,
            [FileName] [varchar](100) NOT NULL,
            [MimeType] [varchar](100) NOT NULL

    I also have a MassEmail table with standard email stuff. Here is the SQL send-mail script (for brevity, I've excluded the declare statements):

        while ( (select count(MassEmailID) from MassEmail where status = 20) > 0)
        begin
            select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20
            select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID
            select @Body = Body from MassEmail where MassEmailID = @MassEmailID

            set @query = 'set nocount on; select cast(FileData as varchar(max))
                from Mydatabase.dbo.EmailAttachment
                where MassEmailID = ' + CAST(@MassEmailID as varchar(100))

            select @filename = ''
            select @filename = COALESCE(@filename + ',', '') + FileName
            from EmailAttachment where MassEmailID = @MassEmailID

            exec msdb.dbo.sp_send_dbmail
                @profile_name = 'MASS_EMAIL',
                @recipients = '[email protected]',
                @subject = @Subject,
                @body = @Body,
                @body_format = 'HTML',
                @query = @query,
                @query_attachment_filename = @filename,
                @attach_query_result_as_file = 1,
                @query_result_separator = '; ',
                @query_no_truncate = 1,
                @query_result_header = 0;

            update MassEmail set status = 30, SendDate = GetDate() where MassEmailID = @MassEmailID
        end

    I am able to successfully read files from the database, so I know the binary data is not corrupted. .txt files only open when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that the attachment file sizes differ from the original files; that is most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mimetype, or some way to include file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know the COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
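    The corruption is expected: @query streams the result set through the mail engine as text, so casting varbinary to varchar mangles any non-text bytes, and @query_attachment_filename names the single file the query result becomes (hence the glued-together names). One common workaround, sketched here without error handling, is to write the varbinary columns to disk from a small client and pass real paths via @file_attachments (a documented sp_send_dbmail parameter that takes a semicolon-separated list of files readable by the SQL Server service account):

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.IO;

        class AttachmentExporter
        {
            // Writes each stored attachment to disk and returns "path1;path2;..."
            // suitable for sp_send_dbmail's @file_attachments parameter.
            static string ExportAttachments(string connectionString, int massEmailId, string folder)
            {
                var paths = new List<string>();
                using (var con = new SqlConnection(connectionString))
                {
                    con.Open();
                    var cmd = new SqlCommand(
                        "select FileName, FileData from EmailAttachment where MassEmailID = @id", con);
                    cmd.Parameters.AddWithValue("@id", massEmailId);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            string path = Path.Combine(folder, reader.GetString(0));
                            // Raw bytes go straight to disk: no varchar cast, so the
                            // file headers (PDF, JPEG, ...) survive intact.
                            File.WriteAllBytes(path, (byte[])reader["FileData"]);
                            paths.Add(path);
                        }
                    }
                }
                return string.Join(";", paths.ToArray());
            }
        }

    The same idea works from T-SQL alone (e.g. a CLR procedure or bcp writing the files out first); the essential point is that binary attachments need to travel as files, not as a query result.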

    Read the article

  • KnpLabs / DoctrineBehaviors Translatable - currentLocale = null

    - by Ruben
    Using the trait \Knp\DoctrineBehaviors\Model\Translatable\Translation inside an entity, I see that the property currentLocale is never set, so we always get the default locale ('en') in all the calls to $this->translate(). If I change getDefaultLocale, all the translations are made correctly, so I don't think the problem is with the fallback. I tried to debug currentLocaleCallable: if I put a var_dump($this->container->get('request')); in the constructor of currentLocaleCallable, the request has a null locale, while outside of it the request has the correct locale. It seems that the container is not in the request scope, and I don't know how to get it to work. I posted an issue on GitHub: https://github.com/KnpLabs/DoctrineBehaviors/issues/71

    EDITED: This service is defined in vendor/knplabs/doctrine-behaviors/config/orm-services.yml as:

        knp.doctrine_behaviors.reflection.class_analyzer:
            class: "%knp.doctrine_behaviors.reflection.class_analyzer.class%"
            public: false

        knp.doctrine_behaviors.translatable_listener:
            class: "%knp.doctrine_behaviors.translatable_listener.class%"
            public: false
            arguments:
                - "@knp.doctrine_behaviors.reflection.class_analyzer"
                - "%knp.doctrine_behaviors.reflection.is_recursive%"
                - "@knp.doctrine_behaviors.translatable_listener.current_locale_callable"
            tags:
                - { name: doctrine.event_subscriber }

        knp.doctrine_behaviors.translatable_listener.current_locale_callable:
            class: "%knp.doctrine_behaviors.translatable_listener.current_locale_callable.class%"
            arguments:
                - "@service_container" # lazy request resolution
            public: false

    EDIT 2: My composer.json:

        "php": ">=5.3.3",
        "symfony/symfony": "2.3.*",
        "doctrine/orm": ">=2.2.3,<2.4-dev",
        "doctrine/doctrine-bundle": "1.2.*",
        "twig/extensions": "1.0.*",
        "symfony/assetic-bundle": "2.3.*",
        "symfony/swiftmailer-bundle": "2.3.*",
        "symfony/monolog-bundle": "2.3.*",
        "sensio/distribution-bundle": "2.3.*",
        "sensio/framework-extra-bundle": "2.3.*",
        "sensio/generator-bundle": "2.3.*",
        "incenteev/composer-parameter-handler": "~2.0",
        "friendsofsymfony/user-bundle": "1.3.*",
        "avalanche123/imagine-bundle": "v2.1",
        "raulfraile/ladybug-bundle": "~1.0",
        "genemu/form-bundle": "2.2.*",
        "friendsofsymfony/rest-bundle": "0.12.0",
        "stof/doctrine-extensions-bundle": "dev-master",
        "sonata-project/admin-bundle": "dev-master",
        "a2lix/translation-form-bundle": "1.*@dev",
        "sonata-project/user-bundle": "dev-master",
        "psliwa/pdf-bundle": "dev-master",
        "jms/serializer-bundle": "dev-master",
        "jms/di-extra-bundle": "dev-master",
        "knplabs/doctrine-behaviors": "dev-master",
        "sonata-project/doctrine-orm-admin-bundle": "dev-master",
        "knplabs/knp-paginator-bundle": "dev-master",
        "friendsofsymfony/jsrouting-bundle": "~1.1",
        "zendframework/zend-validator": ">=2.0.0-rc2",
        "zendframework/zend-barcode": ">=2.0.0-rc2"

  • NServiceBus and NHibernate - Message Handler and Transactions

    - by mattcodes
    From my understanding, NServiceBus executes the Handle method of an IMessageHandler within a transaction; if an exception propagates out of this method, NServiceBus puts the message back on the queue (up to X times before it goes to the error queue), so we have an atomic operation, so to speak. Now, suppose that inside my NServiceBus Handle method I do something like this:

        using (var trans1 = session.BeginTransaction())
        {
            person.Age = 10;
            session.Update(person);
            trans1.Commit();
        }

        using (var trans2 = session.BeginTransaction())
        {
            person.Age = 20;
            session.Update(person);
            // throw new ApplicationException("Oh no");
            trans2.Commit();
        }

    What is the effect of this on the transaction scope? Is trans1 now counted as a nested transaction in terms of its relationship with the NServiceBus transaction, even though we have done nothing to marry them up? (If not, how would one link onto the transaction of NServiceBus?) Looking at the second block (trans2): if I uncomment the throw statement, will the NServiceBus transaction then roll back trans1 as well? In a basic scenario, say I dump the above into a console app, trans1 is independent: committed, flushed, and it won't roll back. I'm trying to clarify what happens now that we sit inside someone else's transaction, like NServiceBus's. The above is just example code; I wouldn't be working directly with the session, but through a unit-of-work pattern.
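    For reference, the ambient-transaction semantics in question come from System.Transactions: NServiceBus (with the transactional MSMQ transport) typically wraps Handle in a TransactionScope, and an NHibernate session only participates if its ADO.NET connection enlists in that ambient transaction. Below is a minimal stand-alone sketch of the nesting behaviour, with TransactionScope standing in for both the bus transaction and the two inner units of work; it is not NServiceBus or NHibernate code:

        using System;
        using System.Transactions;

        class AmbientTransactionDemo
        {
            static void Main()
            {
                try
                {
                    // Stand-in for the transaction NServiceBus opens around Handle().
                    using (var ambient = new TransactionScope())
                    {
                        FirstUnitOfWork();   // joins the ambient transaction
                        SecondUnitOfWork();  // throws, which dooms the ambient transaction
                        ambient.Complete();  // never reached
                    }
                }
                catch (ApplicationException)
                {
                    // The ambient transaction was never completed, so every resource
                    // that enlisted in it (e.g. a database connection) rolls back,
                    // including the work done in FirstUnitOfWork().
                    Console.WriteLine("Everything rolled back together.");
                }
            }

            static void FirstUnitOfWork()
            {
                // Required (the default) joins the existing ambient transaction
                // instead of creating an independent one.
                using (var scope = new TransactionScope(TransactionScopeOption.Required))
                {
                    // ... the equivalent of person.Age = 10; session.Update(person); ...
                    scope.Complete(); // a vote to commit; the actual commit happens
                                      // only when the root scope completes
                }
            }

            static void SecondUnitOfWork()
            {
                using (var scope = new TransactionScope(TransactionScopeOption.Required))
                {
                    throw new ApplicationException("Oh no");
                } // disposed without Complete(): the ambient transaction is marked abort-only
            }
        }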

  • Deploy ASP.NET MVC 2 to IIS 7.5 targeting .NET 3.5

    - by Agent_9191
    I created an ASP.NET MVC 2 application in Visual Studio 2008. I set the release build to go through the ASP.NET compiler to precompile all the views, minify the JavaScript and CSS, clean up the web.config, etc. Since the production deployment is going to an IIS 6 server, I set up my pseudo-production deployment on my Windows 7 machine with the application pool running in classic mode, targeting the 2.0 runtime. I set up the necessary extensionless handler in the web.config, and everything worked great. The problem came when I upgraded the solution to Visual Studio 2010. I'm still targeting the 3.5 framework, but now I'm using MSBuild 4.0, since that's what Visual Studio 2010 uses. Everything still compiles correctly, and it runs fine under Cassini, but when I deploy it to the same location (same application pool, identity, etc.) it now behaves differently. I still have the extensionless handler in the web.config, but now when I navigate to the root of the application it does directory browsing, and any routes it previously handled come back as 404 errors handled by the StaticFile handler in IIS. I'm at a loss for what changed to cause the break. I have looked at this question, but I have already verified that all the prerequisite components are installed.

  • Error launching application after ClickOnce deployment - "An application for this deployment is alre

    - by digiduck
    As part of my continuous integration build, the application is deployed as a ClickOnce application. This works great the first time, but when I try to launch the app after an update has been deployed, I get the following error:

        An application for this deployment is already installed with a different application identity.

    If I run mage.exe -cc to clear the application cache for all ClickOnce apps, I can launch the application just fine. Has anyone run into this before? How can I fix it? Here are the steps in my build script that publish the ClickOnce application:

        ./tools/windows_sdk/mage.exe -New Application -Processor msil -ToFile "C:\temp\build\RoadrunnerTrap.exe.manifest" -Name "Roadrunner Trap" -Version 1.0.0.1 -FromDirectory "C:\temp\build"

        # artifacts from C:\temp\build\ are copied to \\server\publish\v1.0.0.1\

        ./tools/windows_sdk/mage.exe -New Deployment -Processor msil -Install false -Publisher "Acme, Inc." -ProviderUrl "\\server\publish\RoadrunnerTrap.application" -Name "Roadrunner Trap" -AppManifest "\\server\publish\v1.0.0.1\RoadrunnerTrap.exe.manifest" -ToFile "\\server\publish\RoadrunnerTrap.application"

    Note that the version number does change with every deployment.

  • ASP.NET PowerShell Impersonation

    - by Ben
    I have developed an ASP.NET MVC web application to execute PowerShell scripts. I am using the VS web server and can execute scripts fine. However, a requirement is that users are able to execute scripts against AD that perform actions their own user accounts are not allowed to do. Therefore I am using impersonation to switch the identity before creating the PowerShell runspace:

        Runspace runspace = RunspaceFactory.CreateRunspace(config);
        var currentuser = WindowsIdentity.GetCurrent().Name;
        if (runspace.RunspaceStateInfo.State == RunspaceState.BeforeOpen)
        {
            runspace.Open();
        }

    I have tested using a domain admin account, and I get the following exception when calling runspace.Open():

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Requested registry access is not allowed.

    The web application is running in full trust, and I have explicitly added the account I am using for impersonation to the local administrators group of the machine (even though the domain admins group was already there). I'm using the advapi32.dll LogonUser call to perform the impersonation, in a similar way to this post: http://blogs.msdn.com/webdav_101/archive/2008/09/25/howto-calling-exchange-powershell-from-an-impersonated-thead.aspx Any help appreciated, as this is a bit of a show stopper at the moment. Thanks, Ben
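    For comparison, here is the general shape of the LogonUser-based impersonation around runspace.Open(); a sketch with hypothetical credentials, mirroring the approach in the linked post rather than a confirmed fix for the registry error:

        using System;
        using System.ComponentModel;
        using System.Management.Automation.Runspaces;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        static class ImpersonatedRunspace
        {
            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr handle);

            const int LOGON32_LOGON_INTERACTIVE = 2;
            const int LOGON32_PROVIDER_DEFAULT = 0;

            public static void RunScript(string script)
            {
                IntPtr token;
                // Hypothetical service account; in practice the credentials
                // come from secure configuration, never literals.
                if (!LogonUser("svc-powershell", "MYDOMAIN", "password",
                        LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                    throw new Win32Exception(Marshal.GetLastWin32Error());

                try
                {
                    using (var identity = new WindowsIdentity(token))
                    using (WindowsImpersonationContext ctx = identity.Impersonate())
                    using (Runspace runspace = RunspaceFactory.CreateRunspace())
                    {
                        runspace.Open(); // runs under the impersonated identity
                        using (Pipeline pipeline = runspace.CreatePipeline(script))
                        {
                            pipeline.Invoke();
                        }
                    }
                }
                finally
                {
                    CloseHandle(token);
                }
            }
        }

    If the SecurityException persists under the impersonated token, the message itself points at registry permissions: the PowerShell engine reads its configuration from the registry when the runspace opens, so the impersonated account needs read access there.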

  • InvalidOperationException when calling SaveChanges in .NET Entity framework

    - by Pär Björklund
    Hi, I'm trying to learn how to use the Entity Framework, but I've hit an issue I can't solve. I'm walking through a list of movies that I have and inserting each one into a simple database. This is the code I'm using:

        private void AddMovies(DirectoryInfo dir)
        {
            MovieEntities db = new MovieEntities();
            foreach (DirectoryInfo d in dir.GetDirectories())
            {
                Movie m = new Movie { Name = d.Name, Path = dir.FullName };
                db.AddToMovies(m);
            }
            db.SaveChanges();
        }

    When I do this, I get an exception at db.SaveChanges() that reads:

        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

    I haven't been able to find out what's causing this. My database table contains three columns:

        Id   int autoincrement
        Name nchar(255)
        Path nchar(255)

    Update: I checked my edmx file, and the SSDL section has StoreGeneratedPattern="Identity" as suggested. I also followed the blog post and tried to add ClientAutoGenerated="true" and StoreGenerated="true" in the CSDL as suggested there. This resulted in compile errors (Error 5: The 'ClientAutoGenerated' attribute is not allowed.). Since the blog post is from 2006 and it has a link to a follow-up post, I assume this has been changed. However, I cannot read the follow-up post since it seems to require an MSDN account.
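    For reference, the SSDL attribute mentioned in the update looks like this inside the .edmx storage model (a minimal fragment; only the key property is shown):

        <!-- SSDL (storage model) fragment: tells EF the store generates Id,
             so inserted entities get their real keys back on SaveChanges
             instead of all sharing the default key of 0. -->
        <Property Name="Id" Type="int" Nullable="false" StoreGeneratedPattern="Identity" />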

  • AbstractMethodError on org.apache.xalan.processor.TransformerFactoryImpl

    - by JBristow
    With the following code:

        private Document transformDoc(Source source) throws TransformerException, IOException
        {
            Transformer xslTransformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(pdfTransformXslt.getInputStream()));
            xslTransformer.setParameter("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
            xslTransformer.setParameter("http://xml.org/sax/features/validation", false);
            JDOMResult result = new JDOMResult();
            xslTransformer.transform(source, result);
            return result.getDocument();
        }

    I'm getting the following error:

        java.lang.AbstractMethodError: org.apache.xalan.processor.TransformerFactoryImpl.setFeature(Ljava/lang/String;Z)V

    Why is this? Here's my Maven dependency tree:

        ------------------------------------------------------------------------
        Building mc-hub-batch
        task-segment: [dependency:tree]
        ------------------------------------------------------------------------
        snapshot com.billmelater:mc-test-support:2.0.0.11-SNAPSHOT: checking for updates from repository.jboss.org
        [dependency:tree {execution: default-cli}]
        com.billmelater:mc-hub-batch:jar:2.0.0.11-SNAPSHOT
        +- com.billmelater:mc-hub-core:jar:2.0.0.11-SNAPSHOT:compile
        |  +- commons-lang:commons-lang:jar:2.4:compile
        |  +- commons-collections:commons-collections:jar:3.2.1:compile
        |  +- commons-beanutils:commons-beanutils:jar:1.8.0:compile
        |  +- commons-digester:commons-digester:jar:2.0:compile
        |  |  +- (commons-beanutils:commons-beanutils:jar:1.8.0:compile - omitted for duplicate)
        |  |  \- (commons-logging:commons-logging:jar:1.1.1:compile - version managed from 1.0.4; omitted for duplicate)
        |  \- (org.springframework.batch:spring-batch-core:jar:2.0.2.RELEASE:compile - omitted for duplicate)
        +- com.billmelater:mc-test-support:jar:2.0.0.11-SNAPSHOT:test
        |  +- (com.billmelater:mc-hub-core:jar:2.0.0.11-SNAPSHOT:test - omitted for duplicate)
        |  +- (org.springframework:spring:jar:2.5.6:test - omitted for duplicate)
        |  +- org.springframework:spring-jdbc:jar:2.5.6.SEC01:test
        |  |  +- (commons-logging:commons-logging:jar:1.1.1:test - omitted for duplicate)
        |  |  +- (org.springframework:spring-beans:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6)
        |  |  +- (org.springframework:spring-context:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6)
        |  |  +- (org.springframework:spring-core:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6)
        |  |  \- (org.springframework:spring-tx:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6)
        |  +- (org.dbunit:dbunit:jar:2.4.5:test - omitted for duplicate)
        |  +- (log4j:log4j:jar:1.2.15:test - omitted for duplicate)
        |  +- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.5.8; scope updated from test; omitted for duplicate)
        |  +- (org.slf4j:slf4j-log4j12:jar:1.5.6:test - omitted for duplicate)
        |  +- org.jboss.seam:jboss-seam:jar:2.2.0.GA:test
        |  |  +- xstream:xstream:jar:1.1.3:test
        |  |  +- (xpp3:xpp3_min:jar:1.1.3.4.O:compile - scope updated from test; omitted for duplicate)
        |  |  \- org.jboss.el:jboss-el:jar:1.0_02.CR4:test
        |  +- (org.testng:testng:jar:jdk15:5.8:test - omitted for duplicate)
        |  +- (org.hibernate:hibernate-core:jar:3.3.2.GA:test - version managed from 3.3.0.SP1; omitted for duplicate)
        |  +- org.hibernate:hibernate-entitymanager:jar:3.4.0.GA:test
        |  |  +- (org.hibernate:ejb3-persistence:jar:1.0.2.GA:test - omitted for duplicate)
        |  |  +- (org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:test - omitted for duplicate)
        |  |  +- (org.hibernate:hibernate-annotations:jar:3.4.0.GA:test - omitted for duplicate)
        |  |  +- (org.hibernate:hibernate-core:jar:3.3.2.GA:test - version managed from 3.3.0.SP1; omitted for duplicate)
        |  |  +- (org.slf4j:slf4j-api:jar:1.5.6:test - version managed from 1.4.2; omitted for duplicate)
        |  |  +- (dom4j:dom4j:jar:1.6.1-jboss:test - version managed from 1.6.1; omitted for duplicate)
        |  |  +- (javax.transaction:jta:jar:1.0.1B:test - version managed from 1.1; omitted for duplicate)
        |  |  \- javassist:javassist:jar:3.4.GA:test
        |  +- (org.hibernate:hibernate-validator:jar:3.1.0.GA:test - omitted for duplicate)
        |  +- (org.apache.velocity:velocity:jar:1.6.2:test - omitted for duplicate)
        |  \- (ojdbc:ojdbc:jar:14:test - omitted for duplicate)
        +- org.springframework:spring:jar:2.5.6:compile
        +- org.springframework.batch:spring-batch-core:jar:2.0.2.RELEASE:compile
        |  +- org.springframework.batch:spring-batch-infrastructure:jar:2.0.2.RELEASE:compile
        |  |  +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate)
        |  |  +- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate)
        |  |  \- (stax:stax:jar:1.2.0:compile - omitted for duplicate)
        |  +- org.aspectj:aspectjrt:jar:1.5.4:compile
        |  +- org.aspectj:aspectjweaver:jar:1.5.4:compile
        |  +- com.thoughtworks.xstream:xstream:jar:1.3:compile
        |  |  \- xpp3:xpp3_min:jar:1.1.4c:compile
        |  +- org.codehaus.jettison:jettison:jar:1.0:compile
        |  +- org.springframework:spring-aop:jar:2.5.6:compile
        |  |  +- aopalliance:aopalliance:jar:1.0:compile
        |  |  +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate)
        |  |  +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate)
        |  |  \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate)
        |  +- org.springframework:spring-beans:jar:2.5.6:compile
        |  |  +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate)
        |  |  \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate)
        |  +- org.springframework:spring-context:jar:2.5.6:compile
        |  |  +- (aopalliance:aopalliance:jar:1.0:compile - omitted for duplicate)
        |  |  +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate)
        |  |  +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate)
        |  |  \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate)
        |  +- org.springframework:spring-core:jar:2.5.6:compile
        |  |  \- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate)
        |  +- org.springframework:spring-tx:jar:2.5.6:compile
        |  |  +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate)
        |  |  +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate)
        |  |  +- (org.springframework:spring-context:jar:2.5.6:compile - omitted for duplicate)
        |  |  \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate)
        |  \- stax:stax:jar:1.2.0:compile
        |     \- stax:stax-api:jar:1.0.1:compile
        +- commons-dbcp:commons-dbcp:jar:1.2.2:compile
        |  \- commons-pool:commons-pool:jar:1.3:compile
        +- org.hibernate:hibernate-core:jar:3.3.2.GA:compile
        |  +- antlr:antlr:jar:2.7.7:compile (version managed from 2.7.6)
        |  +- dom4j:dom4j:jar:1.6.1-jboss:compile (version managed from 1.6.1)
        |  +- javax.transaction:jta:jar:1.0.1B:compile (version managed from 1.1)
        |  \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate)
        +- org.hibernate:hibernate-validator:jar:3.1.0.GA:compile
        |  +- (org.hibernate:hibernate-core:jar:3.3.2.GA:compile - version managed from 3.3.0.SP1; omitted for duplicate)
        |  +- org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:compile
        |  |  \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate)
        |  \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate)
        +- org.hibernate:hibernate-annotations:jar:3.4.0.GA:compile
        |  +- org.hibernate:ejb3-persistence:jar:1.0.2.GA:compile
        |  +- (org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:compile - omitted for duplicate)
        |  +- (org.hibernate:hibernate-core:jar:3.3.2.GA:compile - version managed from 3.3.0.SP1; omitted for duplicate)
        |  +- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate)
        |  \- (dom4j:dom4j:jar:1.6.1-jboss:compile - version managed from 1.6.1; omitted for duplicate)
        +- ojdbc:ojdbc:jar:14:compile
        +- org.slf4j:slf4j-api:jar:1.5.6:compile
        +- org.slf4j:slf4j-log4j12:jar:1.5.6:compile
        |  \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate)
        +- log4j:log4j:jar:1.2.15:compile
        +- org.apache.velocity:velocity:jar:1.6.2:compile
        |  +- (commons-collections:commons-collections:jar:3.2.1:compile - omitted for duplicate)
        |  +- (commons-lang:commons-lang:jar:2.4:compile - omitted for duplicate)
        |  \- oro:oro:jar:2.0.8:compile
        +- org.testng:testng:jar:jdk15:5.8:test
        +- org.dbunit:dbunit:jar:2.4.5:test
        |  +- junit:junit:jar:4.7:test (version managed from 3.8.2)
        |  +- (org.slf4j:slf4j-api:jar:1.5.6:test - version managed from 1.4.2; omitted for duplicate)
        |  \- (commons-collections:commons-collections:jar:3.2.1:test - omitted for duplicate)
        +- hsqldb:hsqldb:jar:1.8.0.7:test
        +- jboss:javassist:jar:3.3.ga:provided
        +- org.jdom:jdom:jar:1.1:compile
        +- jaxen:jaxen:jar:1.1.1:provided
        +- org.apache.xmlgraphics:fop:jar:0.95:compile
        |  +- (org.apache.xmlgraphics:xmlgraphics-commons:jar:1.3.1:compile - omitted for duplicate)
        |  +- org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile
        |  |  +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for cycle)
        |  |  +- org.apache.xmlgraphics:batik-anim:jar:1.7:compile
        |  |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate)
        |  |  |  \- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- org.apache.xmlgraphics:batik-css:jar:1.7:compile
        |  |  |  +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  |  \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate)
        |  |  +- org.apache.xmlgraphics:batik-dom:jar:1.7:compile
        |  |  |  +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (xalan:xalan:jar:2.6.0:compile - omitted for duplicate)
        |  |  |  \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate)
        |  |  +- org.apache.xmlgraphics:batik-parser:jar:1.7:compile
        |  |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  |  +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  |  \- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate)
        |  |  +- org.apache.xmlgraphics:batik-util:jar:1.7:compile
        |  |  \- xml-apis:xml-apis-ext:jar:1.3.04:compile
        |  +- org.apache.xmlgraphics:batik-bridge:jar:1.7:compile
        |  |  +- (org.apache.xmlgraphics:batik-anim:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for cycle)
        |  |  +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for cycle)
        |  |  +- org.apache.xmlgraphics:batik-script:jar:1.7:compile
        |  |  +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- org.apache.xmlgraphics:batik-xml:jar:1.7:compile
        |  |  |  \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- xalan:xalan:jar:2.6.0:compile
        |  |  \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate)
        |  +- org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile
        |  |  \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  +- org.apache.xmlgraphics:batik-gvt:jar:1.7:compile
        |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for cycle)
        |  |  +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate)
        |  |  \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  +- org.apache.xmlgraphics:batik-transcoder:jar:1.7:compile
        |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate)
        |  |  +- org.apache.xmlgraphics:batik-svggen:jar:1.7:compile
        |  |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  |  \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate)
        |  |  \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate)
        |  +- org.apache.xmlgraphics:batik-extension:jar:1.7:compile
        |  |  +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate)
        |  |  +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate)
        |  |  \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate)
        |  +- org.apache.xmlgraphics:batik-ext:jar:1.7:compile
        |  +- commons-logging:commons-logging:jar:1.1.1:compile
        |  +- commons-io:commons-io:jar:1.3.1:compile
        |  \- org.apache.avalon.framework:avalon-framework-api:jar:4.3.1:compile
        +- org.apache.xmlgraphics:xmlgraphics-commons:jar:1.3.1:compile
        |  +- (commons-io:commons-io:jar:1.3.1:compile - omitted for duplicate)
        |  \- (commons-logging:commons-logging:jar:1.1.1:compile - version managed from 1.0.4; omitted for duplicate)
        +- org.easymock:easymock:jar:2.0:test
        \- org.easymock:easymockclassextension:jar:2.2:test
           +- (org.easymock:easymock:jar:2.2:test - omitted for conflict with 2.0)
           \- cglib:cglib-nodep:jar:2.2:test (version managed from 2.1_3)

    Can anyone tell me how to clear out IntelliJ's classpath too?
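    One reading of the tree above: xalan:xalan:jar:2.6.0 arrives transitively through fop's batik-bridge. Xalan 2.6.0 predates JAXP 1.3, so its TransformerFactoryImpl has no setFeature(String, boolean), which matches the AbstractMethodError when newer JAXP code calls that method. (Separately, the snippet passes feature URIs to Transformer.setParameter, which sets XSLT stylesheet parameters, not parser features.) A sketch of a pom.xml exclusion, with coordinates taken from the tree, that keeps the old Xalan off the classpath:

        <!-- Sketch: exclude the Xalan 2.6.0 that batik-bridge drags in via fop;
             its TransformerFactoryImpl lacks the JAXP 1.3 setFeature method. -->
        <dependency>
            <groupId>org.apache.xmlgraphics</groupId>
            <artifactId>fop</artifactId>
            <version>0.95</version>
            <exclusions>
                <exclusion>
                    <groupId>xalan</groupId>
                    <artifactId>xalan</artifactId>
                </exclusion>
            </exclusions>
        </dependency>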

  • CA2000 and disposal of WCF client

    - by Mayo
    There is plenty of information out there concerning WCF clients and the fact that you cannot simply rely on a using statement to dispose of the client, because the Close method can throw an exception (e.g., if the server hosting the service doesn't respond). I've done my best to implement something that adheres to the numerous suggestions out there:

        public void DoSomething()
        {
            MyServiceClient client = new MyServiceClient(); // from service reference
            try
            {
                client.DoSomething();
            }
            finally
            {
                client.CloseProxy();
            }
        }

        public static void CloseProxy(this ICommunicationObject proxy)
        {
            if (proxy == null)
                return;
            try
            {
                if (proxy.State != CommunicationState.Closed &&
                    proxy.State != CommunicationState.Faulted)
                {
                    proxy.Close();
                }
                else
                {
                    proxy.Abort();
                }
            }
            catch (CommunicationException)
            {
                proxy.Abort();
            }
            catch (TimeoutException)
            {
                proxy.Abort();
            }
            catch
            {
                proxy.Abort();
                throw;
            }
        }

    This appears to be working as intended. However, when I run Code Analysis in Visual Studio 2010, I still get a CA2000 warning:

        CA2000 : Microsoft.Reliability : In method 'DoSomething()', call System.IDisposable.Dispose on object 'client' before all references to it are out of scope.

    Is there something I can do to my code to get rid of the warning, or should I use SuppressMessage to hide it once I am comfortable that I am doing everything possible to ensure the client is disposed? Related resources that I've found:
    http://www.theroks.com/2011/03/04/wcf-dispose-problem-with-using-statement/
    http://www.codeproject.com/Articles/151755/Correct-WCF-Client-Proxy-Closing.aspx
    http://codeguru.earthweb.com/csharp/.net/net_general/tipstricks/article.php/c15941/
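    One restructuring that is often suggested, sketched below, keeps the proxy in a local, calls Close on the success path, and falls back to Abort in a finally block when Close never ran. It is not guaranteed to silence CA2000 in every ruleset, but it gives the analyzer a dispose-equivalent call on every path:

        public void DoSomething()
        {
            MyServiceClient client = null; // generated WCF service-reference client
            try
            {
                client = new MyServiceClient();
                client.DoSomething();
                client.Close();  // may throw CommunicationException / TimeoutException
                client = null;   // success: nothing left for the finally block to clean up
            }
            finally
            {
                if (client != null)
                {
                    client.Abort(); // Close failed or never ran; Abort does not throw
                }
            }
        }

    If the warning persists, suppressing it with a justification is the commonly accepted endpoint: on ClientBase-derived proxies, Dispose simply calls Close, which is exactly what the code above already does on the success path.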

  • Windows Sharepoint Services - FullTextSqlQuery Document library Unable to find items created by SYST

    - by Ashok
    We have created an ASP.NET web app that uploads files to a WSS document library. The files get added under 'SYSTEM ACCOUNT' in the library. The FullTextSqlQuery class is used to search the document library items, but it only finds files that have been uploaded by a Windows user account like 'Administrator' and ignores the ones uploaded by 'SYSTEM ACCOUNT'. As a result, the search results are empty even though we have the necessary data in the document library. What could be the reason for this? The code is given below:

        public static List<SPListItem> GetListItemsFromFTSQuery(string searchText)
        {
            string docLibUrl = "http://localhost:6666/Articles%20Library/Forms/AllItems.aspx";
            List<SPListItem> items = new List<SPListItem>();
            DataTable retResults = new DataTable();
            SPSecurity.RunWithElevatedPrivileges(delegate
            {
                using (SPSite site = new SPSite(docLibUrl))
                {
                    SPWeb CRsite = site.OpenWeb();
                    SPList ContRep = CRsite.GetListFromUrl(docLibUrl);
                    FullTextSqlQuery fts = new FullTextSqlQuery(site);
                    fts.QueryText = "SELECT Title,ContentType,Path FROM portal..scope() WHERE freetext('" + searchText + "') AND (CONTAINS(Path,'\"" + ContRep.RootFolder.ServerRelativeUrl + "\"'))";
                    fts.ResultTypes = ResultType.RelevantResults;
                    fts.RowLimit = 300;
                    if (SPSecurity.AuthenticationMode != System.Web.Configuration.AuthenticationMode.Windows)
                        fts.AuthenticationType = QueryAuthenticationType.PluggableAuthenticatedQuery;
                    else
                        fts.AuthenticationType = QueryAuthenticationType.NtAuthenticatedQuery;
                    ResultTableCollection rtc = fts.Execute();
                    if (rtc.Count > 0)
                    {
                        using (ResultTable relevantResults = rtc[ResultType.RelevantResults])
                            retResults.Load(relevantResults, LoadOption.OverwriteChanges);
                        foreach (DataRow row in retResults.Rows)
                        {
                            if (!row["Path"].ToString().EndsWith(".aspx"))
                            //if (row["ContentType"].ToString() == "Item")
                            {
                                using (SPSite lookupSite = new SPSite(row["Path"].ToString()))
                                {
                                    using (SPWeb web = lookupSite.OpenWeb())
                                    {
                                        SPFile file = web.GetFile(row["Path"].ToString());
                                        items.Add(file.Item);
                                    }
                                }
                            }
                        }
                    }
                } // using ends here
            });
            return items;
        }
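    If the SYSTEM ACCOUNT attribution turns out to be the culprit, one way to avoid it is to perform the upload under a real user's security token instead of inside RunWithElevatedPrivileges, so the items are not stamped as created by SYSTEM ACCOUNT. A sketch, with a hypothetical dedicated upload account:

        using System.IO;
        using Microsoft.SharePoint;

        static class UploadAsRealUser
        {
            public static void Upload(string siteUrl, string localPath)
            {
                byte[] fileBytes = File.ReadAllBytes(localPath);
                using (SPSite plainSite = new SPSite(siteUrl))
                {
                    // Hypothetical account; any real domain user works here.
                    SPUser uploader = plainSite.RootWeb.EnsureUser(@"DOMAIN\docuploader");
                    using (SPSite site = new SPSite(siteUrl, uploader.UserToken))
                    using (SPWeb web = site.OpenWeb())
                    {
                        SPList library = web.Lists["Articles Library"];
                        // Items added through this SPWeb carry the uploader's
                        // identity rather than SYSTEM ACCOUNT.
                        library.RootFolder.Files.Add(
                            Path.GetFileName(localPath), fileBytes, true);
                    }
                }
            }
        }

    Whether the system-account attribution is really what trims the results is worth verifying against the crawl logs first.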
