Search Results

Search found 26214 results on 1049 pages for 'farm solution'.


  • Get timestamp from Authenticode Signed files in .NET

    - by SlavaGu
    We need to verify that binary files are signed properly with a digital signature (Authenticode). This can be achieved with signtool.exe pretty easily. However, we need an automated way that also verifies the signer name and the timestamp. This is doable in native C++ with the CryptQueryObject() API, as shown in this wonderful sample: How To Get Information from Authenticode Signed Executables. However, we live in a managed world :) hence we are looking for a C# solution to the same problem. The straightforward approach would be to P/Invoke into Crypt32.dll and be done with it. But there is a similar managed API in the System.Security.Cryptography.X509Certificates namespace; the X509Certificate2 class seems to provide some information, but no timestamp. So, back to the original question: how can we get the timestamp of a digital signature in C#?
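    One hybrid approach is to let CryptQueryObject crack the file open, hand the encoded PKCS#7 message to SignedCms, and read the counter-signature's signingTime attribute. A sketch, untested here, assuming a classic Authenticode counter-signature rather than an RFC 3161 token, and omitting handle cleanup:

    using System;
    using System.Runtime.InteropServices;
    using System.Security.Cryptography;
    using System.Security.Cryptography.Pkcs; // reference System.Security.dll

    static class AuthenticodeTimestamp
    {
        const int CERT_QUERY_OBJECT_FILE = 1;
        const int CERT_QUERY_CONTENT_FLAG_PKCS7_SIGNED_EMBED = 1 << 10;
        const int CERT_QUERY_FORMAT_FLAG_BINARY = 1 << 1;
        const int CMSG_ENCODED_MESSAGE = 29;

        [DllImport("crypt32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern bool CryptQueryObject(int dwObjectType, string pvObject,
            int dwExpectedContentTypeFlags, int dwExpectedFormatTypeFlags, int dwFlags,
            out int pdwMsgAndCertEncodingType, out int pdwContentType, out int pdwFormatType,
            out IntPtr phCertStore, out IntPtr phMsg, out IntPtr ppvContext);

        [DllImport("crypt32.dll", SetLastError = true)]
        static extern bool CryptMsgGetParam(IntPtr hCryptMsg, int dwParamType,
            int dwIndex, byte[] pvData, ref int pcbData);

        public static DateTime? GetSigningTime(string filePath)
        {
            int encodingType, contentType, formatType;
            IntPtr certStore, msg, context;
            if (!CryptQueryObject(CERT_QUERY_OBJECT_FILE, filePath,
                    CERT_QUERY_CONTENT_FLAG_PKCS7_SIGNED_EMBED,
                    CERT_QUERY_FORMAT_FLAG_BINARY, 0,
                    out encodingType, out contentType, out formatType,
                    out certStore, out msg, out context))
                return null; // not signed, or the query failed

            // pull the raw PKCS#7 SignedData out of the message handle
            int cb = 0;
            CryptMsgGetParam(msg, CMSG_ENCODED_MESSAGE, 0, null, ref cb);
            var der = new byte[cb];
            CryptMsgGetParam(msg, CMSG_ENCODED_MESSAGE, 0, der, ref cb);

            var cms = new SignedCms();
            cms.Decode(der);

            // the timestamp lives in the counter-signature's signingTime attribute
            foreach (SignerInfo counterSigner in cms.SignerInfos[0].CounterSignerInfos)
                foreach (CryptographicAttributeObject attr in counterSigner.SignedAttributes)
                    if (attr.Oid.Value == "1.2.840.113549.1.9.5") // szOID_RSA_signingTime
                        return ((Pkcs9SigningTime)attr.Values[0]).SigningTime;
            return null; // signed but not timestamped
        }
    }

    The signer name, for comparison, can be read separately via X509Certificate.CreateFromSignedFile(filePath).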


  • Colour Name to RGB/Hex/HSL/HSV etc

    - by Abs
    Hello all, I have come across this great function/command for converting a colour to RGB - you can do this: col2rgb("peachpuff") // returns hex. It will return one hex value. I want to extend this using Perl, Python or PHP, but I want to be able to pass in, for example, "yellow" and have the function return all types of yellows - their hex/rgb/?/etc values. I already have a quick solution implemented, which involves mapping colour names to hex values, but now I want to get more precise and use some formulas etc. to determine what's what. However, as usual, I don't have a clue how to do this! So I'd appreciate any implementation advice on how to do it. Thanks all
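    For what it's worth, if a .NET answer is acceptable alongside Perl/Python/PHP, System.Drawing already ships a named-colour table plus hue/saturation/brightness accessors, so "all the yellows" can be found by comparing hues. A rough sketch (the 15-degree hue window is an arbitrary assumption, tune to taste):

    using System;
    using System.Drawing; // KnownColor table plus GetHue/GetSaturation/GetBrightness
    using System.Linq;

    class ColourMatch
    {
        static void Main()
        {
            Color target = Color.FromName("yellow");
            float hue = target.GetHue();

            var similar = Enum.GetValues(typeof(KnownColor)).Cast<KnownColor>()
                .Select(Color.FromKnownColor)
                .Where(c => !c.IsSystemColor && c.A == 255)     // skip UI theme colours
                .Where(c => Math.Abs(c.GetHue() - hue) <= 15f); // "close enough" in hue

            foreach (Color c in similar)
                Console.WriteLine("{0,-20} {1}  rgb({2},{3},{4})  hsb({5:F0},{6:F2},{7:F2})",
                    c.Name, ColorTranslator.ToHtml(c), c.R, c.G, c.B,
                    c.GetHue(), c.GetSaturation(), c.GetBrightness());
        }
    }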


  • Broadcast messages to multiple clients on the same machine

    - by Christopher Chase
    I need to have a server on one machine that broadcasts a message every second or so, kind of like service discovery. The message needs to be received by multiple client processes that could be on the same machine or on different machines. I'm using Delphi 7 with Indy 9.0.18. Where I'm stuck is whether I should be using UDP broadcast or IP multicast, or whether what I want is even possible. I've managed to get it to work with IP multicast with one client per machine, but even after many tries with different bindings, max/min ports, etc., I can't seem to find a solution.
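    IP multicast is the right fit for one-sender/many-listeners discovery; the piece that usually blocks multiple clients on one machine is that each client must set SO_REUSEADDR before binding the shared port, then join the group. Indy's multicast client has an equivalent reuse-address binding option; the idea is sketched here in C# for brevity (group address and port are arbitrary):

    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class DiscoveryListener
    {
        static void Main()
        {
            var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
            // allow several processes on this machine to bind the same discovery port
            sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
            sock.Bind(new IPEndPoint(IPAddress.Any, 5000));
            // join the multicast group the server announces on
            sock.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership,
                new MulticastOption(IPAddress.Parse("239.0.0.1")));

            var buffer = new byte[1024];
            while (true)
            {
                int read = sock.Receive(buffer);
                System.Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
            }
        }
    }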


  • Making a DataSet from another DataSet

    - by M.H
    Hi folks, I have a client-server project (a small project for companies, in C#) and the server has a DataSet with some tables (there is no database, for various reasons, so we save the DataSet as an XML file). When the clients connect to the server, the server should send some information to each client depending on its privileges, and some clients must add to or delete from the DataSet on the server. I am thinking of making a new, small DataSet and sending it to the client (as XML), but I don't know how to generate a new DataSet with specific tables and rows (I tried LINQ to DataSet but nothing worked). My questions are: how can I do that, and is this a good way to send information to clients? Can you suggest a better scenario for sending data to clients (I mean, instead of making a new DataSet)?
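    One way that avoids LINQ entirely is DataView.ToTable: filter each source table, copy the surviving rows into a fresh table, add it to a new DataSet, and ship the XML. A sketch (table, column, and privilege names are invented for illustration):

    using System.Data;

    static class ClientDataBuilder
    {
        public static string BuildClientDataSetXml(DataSet server, int userId)
        {
            var forClient = new DataSet("ClientData");

            // keep only the rows this user may see; filter syntax is DataColumn.Expression
            DataTable orders = new DataView(server.Tables["Orders"],
                "UserId = " + userId, "", DataViewRowState.CurrentRows).ToTable("Orders");
            forClient.Tables.Add(orders);

            // a table the client always gets in full
            forClient.Tables.Add(server.Tables["Products"].Copy());

            return forClient.GetXml(); // client side: new DataSet().ReadXml(...)
        }
    }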


  • Maintain set of local commits working with git-svn

    - by benizi
    I am using git to develop against a project hosted in subversion, using git-svn: git svn clone svn://project/ My general workflow has been to repeatedly edit-and-commit on the master branch, then commit to the svn repository via:

    git stash
    git svn dcommit
    git stash apply

    One of the local modifications that the 'stash' command is preserving, which I don't want to commit to the svn repository, is a changed database connection string. What's the most convenient way to keep this local change without the extra 'stash' steps? I suspect that something like 'stash' or 'quilt' is what I'm looking for, but I'm still new enough to git that I think I'm missing some terminology that would lead to the exact incantation.

    Update: The only solution I found that avoids the git stash + git-svn action + git stash apply sequence was to update the git-svn ref manually (check in the local-only change to 'master', then):

    $ cat .git/refs/heads/master > .git/refs/remotes/git-svn
    $ git svn fetch (with at least one new SVN revision)

    And that leaves the local-only commit as a weird (probably unsafe) commit between two svn revisions.
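    For the narrower problem of one never-to-be-committed local change (the connection string), a common alternative to stashing is to tell the index to pretend the file is unchanged, so dcommit has nothing to stash around. A sketch (the flag exists in git 1.7+; the file path is hypothetical):

    # stop git (and git-svn) from seeing local edits to this file
    git update-index --skip-worktree config/database.properties

    # undo it when you genuinely want to commit the file again
    git update-index --no-skip-worktree config/database.properties

    The older --assume-unchanged flag behaves similarly but is only a performance hint, so git may still overwrite the file; skip-worktree is the stronger of the two.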


  • Installing drivers from an NSIS installer on an x64 system

    - by Alex Che
    I want to add support for x64 OSes to my NSIS installer. One of the installer's tasks is driver installation, for which I've written a special NSIS plugin. This plugin uses the Driver Install Frameworks API (DIFxAPI) to install the drivers. The problem is that this API does not work under WOW64. Is there any way to create an x64 installer application with NSIS? Has anybody solved a similar problem with NSIS? P.S.: The only solution I can see now is to run another application from the installer - an x64 executable that installs the drivers - but that way seems somewhat harder to me, so I'm interested in other solutions.
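    On the separate-executable route, NSIS at least makes the plumbing short: the stock x64.nsh header detects a 64-bit OS, so the installer can extract and run a native x64 helper that makes the DIFxAPI calls. A sketch (the helper executable and the 32-bit plugin call are hypothetical names):

    !include "x64.nsh"

    Section "Drivers"
      InitPluginsDir
      ${If} ${RunningX64}
        ; hand off to a native x64 helper, since DIFxAPI calls from a 32-bit process fail under WOW64
        SetOutPath "$PLUGINSDIR"
        File "DriverInstall64.exe"          ; hypothetical x64 helper built against DIFxAPI
        ExecWait '"$PLUGINSDIR\DriverInstall64.exe" /install' $0
      ${Else}
        InstDrivers::Install                ; your existing 32-bit plugin path (name assumed)
      ${EndIf}
    SectionEnd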


  • Data refresh and drill down problem with SSAS cube and excel services

    - by chaitanya
    I have an SSAS cube which I am using in an Excel document to prepare a report with drill-down etc., and I am publishing it to a SharePoint site. It gets published all right, but when I try to drill down it throws a "Data Refresh failed" error. The data source and the SharePoint site are on the same machine (running Windows Server 2008) and we have Windows authentication running. From what I have been able to find on the internet, there is a problem with passing the Windows authentication credentials through to the database, but I have not been able to find the exact way to sort out this problem. What is the solution for this?


  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volumes of the unstoppable growth of information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption, and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption, and seamless backup. But with a huge volume, the challenge is passed to the DBA: keeping the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository - but what are the implications of this? Can it fit your business requirements? Well, yes. It's up to you how you manage it; you just need a good plan. During your "storage brainstorm plan", keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first idea that comes to mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so you may want to consider the revision creation date instead. Your plan can also have complex rules, such as per document type, per a custom metadata field like department, or a hybrid of date, DocType, and a specific virtual folder. Extrapolating the idea, you can have your repository distributed across different servers, different disks, and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy content and easily matching your compliance requirements.

    If you think of partitioning by revision, the simple way is to use dID, the sequential unique id of every content item created in WebCenter Content, or dLastModified, the date column of the FileStorage table that records when the content was added to the DB table using SecureFiles.

    Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you can use dID, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers:

    - Pre-existing data in the JDBC Storage, to be migrated to the new partitioned JDBC Storage
    - Partitioning by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (may contain some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC Storage to serve as the "legacy" data. If you have not done so before, just create a storage rule pointed to the JDBC Storage, enable the StorageRule metadata field in the UI, and upload some documents using this rule. You can run this test case as the schema owner or as a DBA user; we will use the schema owner, TESTS_OCS. And I can't forget to say that this is just a test - you should do a proper backup of your environment. When you use the schema owner, you need some privileges; grant them using the DBA user:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;
    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3, and Future. The last one will be partitioned automatically, using 3 tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule that you choose.

    Tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the actual FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to execute the redefinition process:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go.

    Create a partitioned interim FileStorage table. You need to create a new table with the partition information to act as an interim table:

    CREATE TABLE FILESTORAGE_Part
    (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE
    (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH),
      PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS),
      PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS),
      PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS)
    );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions exist yet.

    Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,col_mapping => NULL
        ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
      );
    END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user, you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

    Copy the dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS
        ,copy_triggers => TRUE
        ,copy_constraints => TRUE
        ,copy_privileges => TRUE
        ,ignore_errors => TRUE
        ,num_errors => redefinition_errors
        ,copy_statistics => FALSE
        ,copy_mvlog => FALSE
      );
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
      END IF;
    END;

    With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show 2 lines related to the constraints; this is expected.
    Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you should see the new interval partitions created automatically; no error is raised if you "forgot" to create all the future ranges.

    You can now drop the interim FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use the command:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run these as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file has been created and that you can open it; this means that all is working. The redefinition process can be repeated many times, allowing you to test which layout works best, over and over again.

    Now some interesting maintenance actions related to the partitions:

    - Making a tablespace read-only. Viewing is not affected, since WebCenter Content does not alter the revisions. When you try to delete content that is part of a read-only tablespace, however, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if a document is in a "read only" repository, executes the deletion of the metadata and marks the document to be deleted on the next DB maintenance, like a new redefinition.
    - Taking a tablespace offline, for archiving purposes or any other reason. When you try to open a document stored in that tablespace you will receive an error that the content could not be retrieved, but the other, online tablespaces are not affected. The same happens when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message that the repository for that document is offline. This can be extended to an option that lets the user request that it be put back online.
    - Moving some legacy content to an offline repository (table) using the exchange option, which moves the content from one partition to an empty non-partitioned table like FileStorage_LEGACY (see the sketch at the end of this article). Note that this removes the rows from FileStorage, so you will no longer be able to open the stored content. You always need to keep the indexes and constraints in mind.
    - A redefinition separating the original content (vault) from the renditions, partitioned by date at the same time. This could be an option for DAM environments that want to have a special place for the renditions and put the original files on storage with less performance. The process is the same; you just need to change the script of the interim table to use composite partitioning. It will be something like:

    CREATE TABLE FILESTORAGE_RenditionPart
    (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE
    (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    ) ENABLE ROW MOVEMENT;

    The next post related to partitioned repositories will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. If this is important to you, or you have a use case not listed, let us know: leave a comment.

    Cross-posted on blog.ContentrA.com
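    A footnote on the maintenance actions above: using the names of this test scenario, the read-only, offline, and exchange operations map to Oracle SQL roughly as follows. This is a sketch only - for an exchange, the standalone table's layout (including LOB storage) must match FileStorage exactly, so validate against your own schema before running anything:

    -- Make the legacy partition's tablespace read-only (viewing still works)
    ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ ONLY;

    -- Take it offline for archiving (content in it becomes unavailable), and back
    ALTER TABLESPACE TESTS_OCS_PART_LEGACY OFFLINE NORMAL;
    ALTER TABLESPACE TESTS_OCS_PART_LEGACY ONLINE;

    -- Move legacy content out via partition exchange: swap the partition's rows
    -- with an empty, structurally identical non-partitioned table
    CREATE TABLE FileStorage_LEGACY AS SELECT * FROM FileStorage WHERE 1 = 0;
    ALTER TABLE FileStorage EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
      WITH TABLE FileStorage_LEGACY INCLUDING INDEXES WITHOUT VALIDATION;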


  • Win32 API Question

    - by Lalit_M
    We have developed an ASP.NET web application and implemented a custom authentication solution using Active Directory as the credentials store. Our front-end application uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user's credentials. When we call the LogonUser method, we use LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. A folder seems to be created whenever a user who has never logged on before logs in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders being created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable it?
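    Profile folders normally appear only when something loads the user's profile; a bare LOGON32_LOGON_NETWORK logon should not do that by itself, so one thing to audit is whether the returned token is later reused (impersonation, LoadUserProfile) rather than closed immediately after validation - an assumption worth verifying on the target OS. For reference, a validate-and-close sketch of the call as described:

    using System;
    using System.Runtime.InteropServices;

    static class CredentialCheck
    {
        const int LOGON32_LOGON_NETWORK = 3;   // validate-only logon, no interactive session
        const int LOGON32_PROVIDER_DEFAULT = 0;

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LogonUser(string user, string domain, string password,
            int logonType, int logonProvider, out IntPtr token);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr handle);

        public static bool Validate(string user, string domain, string password)
        {
            IntPtr token;
            bool ok = LogonUser(user, domain, password,
                LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, out token);
            if (ok) CloseHandle(token); // never leave logon tokens dangling
            return ok;
        }
    }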


  • Get domain user from IPAddress or Hostname in .Net

    - by Duracell
    We're trying to implement a custom solution for Growl for Windows. We tell the Growl client to subscribe to notifications from our server. The server then gets a message from the client via GNTP (the Growl Notification Transport Protocol). We require the name of the user logged into the client machine in order to do database lookups, but GNTP does not provide this information. So we have a connected client socket (and thus an IP address) and a message from this client containing its machine name. Is there any possible way to determine the username of the user who is logged into the specified machine? We have access to Windows domain services.
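    Given admin rights on the client machines (plausible if domain services are available), one option is to ask the machine itself via WMI: Win32_ComputerSystem exposes the interactively logged-on user. A sketch, with no handling for multiple sessions or locked-down remote WMI:

    using System.Management; // add a reference to System.Management.dll

    static class LoggedOnUser
    {
        public static string Get(string hostNameOrIp)
        {
            var scope = new ManagementScope(@"\\" + hostNameOrIp + @"\root\cimv2");
            scope.Connect(); // runs under the caller's (service account's) credentials

            var query = new ObjectQuery("SELECT UserName FROM Win32_ComputerSystem");
            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject system in searcher.Get())
                    return (string)system["UserName"]; // e.g. "DOMAIN\\jsmith"; null if nobody is at the console
            }
            return null;
        }
    }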


  • Silverlight Project - Setting Reference to Copy Local false not working.

    - by cmaduro
    Why is it that when my Silverlight project is built, the output directory contains a bunch of culture-specific directories (ar\System.Windows.Controls.resources.dll, bg\System.Windows.Controls.resources.dll, ca\System.Windows.Controls.resources.dll, etc.)? The root of the build output also contains System.Xml.Linq.dll and System.Windows.Controls.dll. I have gone through the projects in my solution and made sure that "Copy Local" is set to false for all references to the mentioned DLL files. Those two files were set to true, but I did switch them to false. Despite my efforts to google an answer, I remain stuck.
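    One thing worth checking: in Silverlight projects the culture folders are usually driven not by Copy Local but by the SupportedCultures MSBuild property in the .csproj, and emptying it is the commonly cited fix. A sketch - open the project file in a text editor and set:

    <!-- inside the main <PropertyGroup> of the Silverlight .csproj -->
    <!-- an empty value stops satellite resource assemblies (ar\, bg\, ca\, ...) from being copied -->
    <SupportedCultures></SupportedCultures>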


  • Embed VLC player in GWT

    - by chrisnfoneur
    Hello, I want to embed a VLC player in my web app built with Google's GWT. First I had a look at this page: http://wiki.videolan.org/GWT, which offers a nice solution, but I had to implement all the JavaScript function calls (play, stop, fullscreen) with JSNI. Then I found gwt-player (hosted on Google Code), which does all the work for me, but the annoying part is that the project is not widely used (few posts each month on the project's group, not many mentions in blogs/forums...). Do you know another option to easily embed and control a VLC player in a GWT app? My main goal is to play any video/audio file in a web app and offer the user a fast-forward feature (set rate in VLC). Is there any other player I could use? I have already looked at QuickTime, Windows Media Player and Flowplayer; none of them offers as many features as VLC. Thanks in advance and have a nice New Year's Eve. Chris
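    If you do stay with the wiki approach, the JSNI surface for the features listed is small: the VLC browser plugin exposes playlist, input, and video objects, and playback rate is input.rate. A sketch, assuming the plugin API of that era and that the element passed in is the embedded VLC node:

    import com.google.gwt.dom.client.Element;

    public class VlcControl {
        // vlc is the <embed type="application/x-vlc-plugin"> element
        public static native void play(Element vlc) /*-{
            vlc.playlist.play();
        }-*/;

        public static native void stop(Element vlc) /*-{
            vlc.playlist.stop();
        }-*/;

        public static native void setRate(Element vlc, double rate) /*-{
            vlc.input.rate = rate; // e.g. 2.0 = fast-forward at double speed
        }-*/;

        public static native void toggleFullscreen(Element vlc) /*-{
            vlc.video.toggleFullscreen();
        }-*/;
    }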


  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background

    What Are Smart Grids?
    Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to:
    - Link customers to information that helps them manage consumption and use electricity wisely.
    - Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages.
    - Provide utilities with information that helps them improve performance and control costs.

    What Is Driving Smart Grid Development?

    Environmental Impact
    Smart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers' power needs with fewer generating plants, fewer transmission and distribution assets, and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change. Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions.

    Costs
    The ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management, and large T&D infrastructure expansion costs are not passed on to customers. Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume - but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes. Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy-efficient appliances. Efficiency may or may not lower customer bills, because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it.

    Utility Operations
    Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long "wish lists" of projects and applications they would like to fund in order to improve customer service or ease staff's burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods. Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result - for instance, increased productivity and fuel savings from better routing.
    Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits.

    What Is Smart Grid Business Software?
    Smart Grid business software gathers data from a Smart Grid and uses it to improve a utility's business processes. Smart Grid business software also helps utilities provide relevant information to customers, who can then use it to reduce their own consumption and improve their environmental profiles.

    Smart Grid Business Software Minimizes the Impact of Peak Demand
    Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand:
    - The more assets a utility must build that are used only for brief periods - an inefficient use of capital.
    - The higher the utility's risk profile rises, given the uncertainties surrounding the time needed for permitting, building, and recouping costs.
    - The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods.
    Smart Grids enable a variety of programs that reduce peak demand, including:
    - Time-of-use pricing and critical peak pricing: programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets.
    - Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs.
    - Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts.
    Smart Grids also help utilities manage peaks with existing assets by enabling:
    - Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint the need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or "over-build".
    - Better peak demand analysis. As a result, distribution planners can better size equipment (e.g. transformers) to avoid over-building; operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks (as above, reducing the tendency to over-build); and supply managers can more closely match procurement with delivery, fine-tuning supply portfolios and reducing the tendency to over-contract for peak supply or resort to spot market purchases during high peaks.
    Smart Grids can help lower the cost of remaining peaks by:
    - Standardizing interconnections for new distributed resources (such as electricity storage devices).
    - Placing the interconnections where needed to support anticipated grid congestion.

    Smart Grid Business Software Lowers the Cost of Field Services
    By processing Smart Grid data through their business software, utilities can reduce such field costs as:
    - Vegetation management. Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming.
    - Service vehicle fuel. Many utility service calls are "false alarms." Checking meter status before dispatching crews prevents many unnecessary "truck rolls." Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it.

    Smart Grid Business Software Ensures Regulatory Compliance
    Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by:
    - Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible ("non-core") customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators.
    - Monitoring regulations imposed on customers, such as maximum use during specific time periods.
    - Using accurate time-stamped event history derived from intelligent devices distributed throughout the smart grid to monitor and report reliability statistics and risk compliance.
    - Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9).

    Smart Grid Business Software Strengthens Utilities' Connection to Customers While Reducing Customer Service Costs
    During outages, Smart Grid business software can:
    - Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations, and permits utilities to ensure outage resolution at every meter location.
    - Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers.
    - Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitate the display of outage maps for customer and public-service use.
    Smart Grids can significantly reduce the cost to:
    - Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property or to disconnect customers for nonpayment.
    - Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain.
    - Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption.
    Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can:
    - Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off "high bill" complaints to the contact center. Note that such "high usage" or "additional charges apply because you are out of range" notices - frequently via text messaging - are already common among mobile phone providers.
    - Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.).
    - Help customers find and reduce causes of over-consumption. There's no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them.
    Where permitted, Smart Grids can open the doors to such new utility service offerings as:
    - Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
    - Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement.
    - Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.
    - Prepayment plans that do not need special meters.

    Smart Grid Business Software Helps Customers Control Energy Costs
    There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance:
    - Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites.
    - Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers.
    - Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be the discovery of electrical problems that can be resolved through rewiring or maintenance - before more serious fires or accidents happen.

    Smart Grid Business Software Facilitates Use of Renewables
    Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into grids that traditionally handle only fully dispatchable generation. Smart Grid business software helps solve many of these issues by:
    - Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed.
    - Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion.
    - Facilitating modeling and monitoring of locally generated supply from renewables and thus helping to maximize its use.
    - Increasing the efficiency of "net metering" (through which utilities can use electricity generated by customers) by providing data for analysis and by integrating the production and consumption aspects of customer accounts.
    During non-peak periods, such techniques enable utilities to increase the percentage of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity.

    Conclusion
    Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water. Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission, but they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove an even more formidable task. Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings - benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience.

    This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.


  • Java HTML Parsing

    - by Richie_W
    Hello everyone. I'm working on an app which scrapes data from a website, and I was wondering how I should go about getting the data. Specifically, I need the data contained in a number of div tags which use a specific CSS class. Currently (for testing purposes) I'm just checking each line of HTML for "div class="classname"". This works, but I can't help feeling there is a better solution out there. I.e., is there any nice way where I could give a class a line of HTML and have some nice methods like: boolean usesClass(String CSSClassname); String getText(); String getLink();? Many, many thanks!
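    There is: a CSS-selector-based parser does exactly this. Jsoup, for example, gives you selectors plus text/attribute accessors that line up with the wished-for methods. A sketch (URL and class name are placeholders):

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class Scraper {
        public static void main(String[] args) throws Exception {
            Document doc = Jsoup.connect("http://example.com/page").get();

            // every <div class="classname">, however deep in the tree
            for (Element div : doc.select("div.classname")) {
                String text = div.text();                       // getText()
                String link = div.select("a").attr("abs:href"); // getLink(), absolutized
                boolean uses = div.hasClass("classname");       // usesClass(...)
                System.out.println(text + " -> " + link + " (" + uses + ")");
            }
        }
    }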


  • An Object reference is required for the non-static field

    - by Muhammad Akhtar
    I have made my existing method static to get access to it from JavaScript, like this:

    [WebMethod(EnableSession = true), ScriptMethod()]
    public static void Build(String ID)
    {
        Control releaseControl = LoadControl("~/Controls/MyControl.ascx");
        // Error: An object reference is required for the non-static field, method or property
        // 'System.Web.UI.TemplateControl.LoadControl(string)'
        plc.Controls.Add(releaseControl); // plc is a PlaceHolder control
        // Error: An object reference is required for the non-static field, method, or property '_Default.plc'
    }

    When I build, I get the errors I have posted in the comments below each line; before the method was converted to static, it worked perfectly. Please suggest a solution to my issue. Thanks
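    The underlying problem is that a static page method runs without a Page instance, so it cannot call instance members like LoadControl or reach controls like plc. One commonly used workaround is to spin up a throwaway Page, render the user control to a string, return it, and inject it into the DOM from JavaScript. A sketch of that pattern (not the only option - an UpdatePanel keeps everything instance-side):

    using System.IO;
    using System.Web;
    using System.Web.Services;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;

    public partial class _Default : Page
    {
        [WebMethod(EnableSession = true)]
        public static string Build(string id)
        {
            // LoadControl is an instance method, so create a scratch page to host the control
            var page = new Page();
            var form = new HtmlForm();
            Control control = page.LoadControl("~/Controls/MyControl.ascx");
            form.Controls.Add(control);
            page.Controls.Add(form);

            var html = new StringWriter();
            HttpContext.Current.Server.Execute(page, html, false);
            return html.ToString(); // client JS: document.getElementById('plc').innerHTML = result.d;
        }
    }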


  • Share on Facebook, Retweet, etc, buttons in straight Ruby gem?

    - by Alexandre
    Notice those buttons atop http://techcrunch.com/2010/04/04/he-even-makes-coldplay-sound-fun/ for sharing the URL on social networks? I want to do something very similar for a site I'm building. ShareThis offers a widget that does the same thing, but it's branded and external. I'm looking for a pure Ruby solution - a gem containing a module that could be included via the Rails ApplicationHelper would be perfect. Suggestions appreciated before I go reinventing the wheel!
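    Most of those buttons are just GET links to each network's share endpoint, so a tiny helper module may be all the "gem" you need. A sketch (endpoints as they worked at the time; escape everything):

    require 'cgi'

    # Mix into ApplicationHelper (or package as a gem) and call from any view:
    #   <%= share_links("http://example.com/post/1").join(" ") %>
    module ShareHelper
      SHARERS = {
        'Facebook' => 'http://www.facebook.com/sharer.php?u=%s',
        'Twitter'  => 'http://twitter.com/home?status=%s',
        'Digg'     => 'http://digg.com/submit?url=%s'
      }

      def share_links(url)
        SHARERS.map do |name, template|
          href = template % CGI.escape(url)
          %(<a href="#{href}" target="_blank">Share on #{name}</a>)
        end
      end
    end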


  • How to debug a Tcl script which is an argument to an executable?

    - by user321047
    I have an application which takes a Tcl script as an argument, and I want to debug the Tcl script when the application processes it. My development environment consists of the Dynamic Languages Toolkit from www.eclipse.org/dltk along with the ActiveState remote debugger, dbgp_tcldebug. I am able to debug individual Tcl scripts with this setup. I created a Tcl project in Eclipse, added the 'startup.tcl' and 'argumentScript.tcl' scripts, and added the following commands to the startup script:

    set ExecutableName "xyz.exe"
    set returnValue [catch {eval exec $ExecutableName "argumentScript.tcl"} result]

    My debugger works fine with the 'startup.tcl' script. I added a breakpoint in 'argumentScript.tcl' but it is never hit. How can I debug the "argumentScript.tcl" script? Edit: A solution without using the Eclipse environment is the Tcl Dev Kit, which has a remote debugging feature.


  • Java Web Start: unsigned cglib

    - by Pticed
    I am using Hibernate on the server side with a client application started via Java Web Start. I can't sign the jars (I'd like to, but I can't). I get a permission exception when I fetch a POJO with lazy fields:

    Caused by: java.security.AccessControlException: access denied (java.util.PropertyPermission cglib.debugLocation read)
        at java.security.AccessControlContext.checkPermission(Unknown Source)
        at java.security.AccessController.checkPermission(Unknown Source)
        at java.lang.SecurityManager.checkPermission(Unknown Source)
        at java.lang.SecurityManager.checkPropertyAccess(Unknown Source)
        at java.lang.System.getProperty(Unknown Source)
        at net.sf.cglib.core.DebuggingClassWriter.<clinit>(DebuggingClassWriter.java:35)
        ... 44 more

    How can I avoid this? I thought about setting the collection to null before returning the POJO to the client, but I'd like to find a better solution.


  • Using a C#.net DLL in Node.js / serverside javascript

    - by Dve
    I have spent a while playing with Node.js and exploring related frameworks such as Express and Geddy... and I am very impressed, especially with the WebSockets implementation in Socket.IO. I have a pet project that is an online game; the entire game engine is written in C#, and I would like to know if there is any way I can call the functions of this existing DLL from a solution built using Node.js, Socket.IO, Express, etc. The game engine itself is pretty complete, tested, and robust. I am hoping there is some neat way of exposing its functionality without too much overhead. Thanks
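    One low-friction pattern is to keep the engine in a small C# host process and let Node talk to it over local HTTP, rather than trying to load the DLL in-process. A sketch of the C# side (the route, port, and JSON shaping are all invented for illustration):

    using System;
    using System.IO;
    using System.Net;

    class EngineHost
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8337/engine/"); // port is arbitrary
            listener.Start();
            Console.WriteLine("engine bridge up");

            while (true)
            {
                HttpListenerContext ctx = listener.GetContext(); // blocks per request
                string action = ctx.Request.QueryString["action"];

                // dispatch into the existing game engine DLL here
                string result = "{\"echo\":\"" + action + "\"}";

                using (var writer = new StreamWriter(ctx.Response.OutputStream))
                    writer.Write(result);
                ctx.Response.Close();
            }
        }
    }

    On the Node side, a plain http.get to http://localhost:8337/engine/?action=... completes the bridge; the socket.io layer stays pure Node.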


  • In Django, what's the best way to handle optional url parameters from the template?

    - by Thierry Lam
    I have the following two types of URLs, which are both valid: hello/ and hello/1234/. My urls.py has the following:

    urlpatterns = patterns('hello.views',
        url(r'^$', 'index', name='index'),
        url(r'^(?P<user_id>\d+)/$', 'index', name='index'),
    )

    In my views.py, when I pass user_id to the template, it defaults to 0 if not specified. My template looks like the following (I'm using the namespace hello for my hello app):

    {% url hello:index user_id %}

    If user_id is not specified, the URL defaults to hello/0/. The only way I can think of to prevent the default 0 from showing up in the URL is an if statement:

    {% if user_id %}
        {% url hello:index user_id %}
    {% else %}
        {% url hello:index %}
    {% endif %}

    The above gives me hello/ if there is no user_id and hello/1234/ if it's present. Is this the best way to solve the issue?
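    A slightly tidier variant than branching in the template is to give the two patterns distinct names and let the view supply the default, so each {% url %} call is unambiguous. A sketch, using the Django 1.x-era syntax from the question:

    # urls.py
    urlpatterns = patterns('hello.views',
        url(r'^$', 'index', name='index'),
        url(r'^(?P<user_id>\d+)/$', 'index', name='index_user'),
    )

    # views.py - one view serves both routes
    def index(request, user_id=0):
        ...

    # template: pick the right name instead of testing the value
    # {% url hello:index %}            -> hello/
    # {% url hello:index_user 1234 %}  -> hello/1234/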


  • Memory management in objective-c

    - by prathumca
    I have this code in one of my classes:

    - (void)processArray {
        NSMutableArray *array = [self getArray];
        . . .
        [array release];
        array = nil;
    }

    - (NSMutableArray *)getArray {
        //NO 1: NSMutableArray *array = [[NSMutableArray alloc] init];
        //NO 2: NSMutableArray *array = [NSMutableArray array];
        . . .
        return array;
    }

    NO 1: I create an array and return it; in the processArray method I release it. NO 2: I get an array by simply calling array. As I'm not the owner of it, I don't need to release it in the processArray method. Which is the better alternative, NO 1 or NO 2? Or is there a better solution for this?


  • How do I create an Access 2003 MDE programmatically or by command line in Access 2007?

    - by Ned Ryerson
    I have a legacy Access 2003 database file that must remain in that format to preserve its menus and toolbars. I have recently moved to Access 2007 in my build environment and will be deploying the compiled Access 2003 program with the Access 2007 runtime. In Access 2003, I could script the process of creating an MDE with the Access Developer Extensions (WZADE.mde) using the command line and an .xml file of build preferences (without creating an install package). The Access 2007 developer extensions do not seem to offer a similar option. I can "Package a Solution", but it creates an .accdr and buries it in a CD installer. I've tried programmatic options like DoCmd.RunCommand acCmdMakeMDEFile and SysCmd(603, mdbPath, mdePath), but they no longer work in Access 2007. Of course, I can manually create an MDE using Database Tools > Make MDE, but that is not scriptable as far as I can tell.


  • Changing UIImageView animation speed

    - by Nikolas Stephan
    I am using UIImageView to animate a bunch of images. I know that I can change the speed by altering animationDuration, but that doesn't seem to take effect until the animation is restarted. My problem is that, besides not really wanting to restart the animation (as this limits me to changing the speed only once per cycle), there also doesn't seem to be a way to find out which frame is currently being shown, so I would have to rely on a timer to "guess" which one it is. So my question is whether there is a way to change the speed without restarting the animation, and if not, is there some way I could avoid the aforementioned problem? I'm not too keen to write my own animation class, but I may end up having to if there isn't a nicer solution.
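    As far as I know, UIImageView's built-in animation reads animationDuration only when it starts, so the usual escape hatch short of a full animation class is a repeating timer that advances the frame yourself: "current frame" is then just your own index, and speed changes take effect on the next tick. A sketch (properties for frames, frameIndex, frameTimer, frameInterval, and imageView are assumed; a display link would be smoother than NSTimer):

    - (void)startAnimating {
        [self.frameTimer invalidate];
        self.frameTimer = [NSTimer scheduledTimerWithTimeInterval:self.frameInterval
                                                           target:self
                                                         selector:@selector(showNextFrame)
                                                         userInfo:nil
                                                          repeats:YES];
    }

    - (void)showNextFrame {
        self.imageView.image = [self.frames objectAtIndex:self.frameIndex];
        self.frameIndex = (self.frameIndex + 1) % [self.frames count];
    }

    - (void)setSpeed:(NSTimeInterval)secondsPerFrame {
        self.frameInterval = secondsPerFrame;
        [self startAnimating]; // reschedules from the current frame; no visual restart
    }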


  • How to sort by a field that has an alternative value if null in lucene?

    - by citizenmatt
    Hi folks. I want to sort my Lucene(.NET) search results by a date field (date1), but if date1 is not set, I'd like to use date2. The traditional sort method is to sort by date1, then sort values that are the same by date2. This would mean that whenever I did fall back to date2, those values would be at the top (or bottom) of the result set. I'd like to interleave the date2 values with the date1 values. In other words, I want to sort on (date1 != null ? date1 : date2). Is this possible in Lucene? I reckon I could do this in the index creation phase (just put the relevant date value in a new field), but I don't have enough control of the indexing process to be able to do this, so I would like a sorting solution. Any ideas? Thanks, Matt


  • Hierarchy based aggregation

    - by Ganapathy Subramaniam
    I have a hierarchy table in SQL Server 2005 which contains employees - managers - department - location - state. Sample data for the hierarchy table:

    ID  Name        ParentID  Type
    1   PA          NULL      0 (group)
    2   Pittsburgh  1         1 (subgroup)
    3   Accounts    2         1
    4   Alex        3         2 (employee)
    5   Robin       3         2
    6   HR          2         1
    7   Robert      6         2

    The second table is a fact table containing employee salary details (ID and Salary). Sample data:

    ID  Salary
    4   6000
    5   5000
    7   4000

    Is there a good way to display the hierarchy from the hierarchy table with the aggregated sum of salaries rolled up from the employees? The expected result is:

    Name        Salary
    PA          15000   (Pittsburgh + others, if any)
    Pittsburgh  15000   (Accounts + HR)
    Accounts    11000   (Alex + Robin)
    Alex        6000    (direct value)
    Robin       5000
    HR          4000
    Robert      4000

    In my production environment, the hierarchy table may contain 23,000+ rows and the fact table 300,000+ rows, so I thought of passing a group ID at any level to the query to retrieve just its children and their corresponding aggregated values. Is there a better solution?
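    A recursive CTE handles both the rollup and the "start anywhere" requirement: expand every node into (ancestor, descendant) pairs, join the descendants to the fact table, and group by ancestor. A sketch (table names Hierarchy and SalaryFact are assumptions; filter the anchor with @GroupID to restrict the result to one subtree):

    WITH Subtree (RootID, NodeID) AS
    (
        -- every node is the root of its own subtree (add WHERE ID = @GroupID here if desired)
        SELECT ID, ID FROM dbo.Hierarchy
        UNION ALL
        -- walk down: children of anything already in a subtree belong to that subtree
        SELECT s.RootID, h.ID
        FROM dbo.Hierarchy h
        JOIN Subtree s ON h.ParentID = s.NodeID
    )
    SELECT r.Name, SUM(f.Salary) AS Salary
    FROM Subtree s
    JOIN dbo.Hierarchy r ON r.ID = s.RootID
    JOIN dbo.SalaryFact f ON f.ID = s.NodeID   -- only employee rows carry salaries
    GROUP BY r.ID, r.Name
    ORDER BY r.ID;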

