Search Results

Search found 221 results on 9 pages for 'jeffrey chee'.

Page 3/9 | < Previous Page | 1 2 3 4 5 6 7 8 9  | Next Page >

  • Use of list-unsubscribe to improve inbox delivery

    - by Jeffrey Simon
    To keep our email from being classified as spam by Gmail, we have implemented the steps Google recommends (namely SPF, DKIM, and Precedence: bulk). One additional measure they recommend at https://support.google.com/mail/bin/answer.py?hl=en&answer=81126#authentication reads as follows: "Because Gmail can help users automatically unsubscribe from your email, we strongly recommend the following: Provide a 'List-Unsubscribe' header which points to an email address where the user can unsubscribe easily from future mailings (Note: This is not a substitute method for unsubscribing)." Documentation for List-Unsubscribe is found at http://www.list-unsubscribe.com/. From this documentation I expect a supporting mail client to provide an unsubscribe button. I have tested the 'List-Unsubscribe' header in both Gmail and OS X Mail, with an http address alone and with both an email address and an http address, and no button appears in any test. The format of the header is as follows: List-Unsubscribe: <mailto:[email protected]>, <http://domain.com/member/unsubscribe/[email protected]?id=12345N> My questions: How widely is List-Unsubscribe supported? Should a button be appearing somewhere, or does something else have to be present? I have seen a comment that even if the button is not shown, services like Gmail, Yahoo, and Hotmail/Windows Live give higher regard to email carrying the header, so it might be worthwhile for that aspect alone. Please note that our standard email footer already contains instructions and a link to allow unsubscribing from our email. Finally, is it worthwhile to implement this header? (That is, are there any downsides?)
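
    For reference, a minimal sketch of adding the header with Python's standard email library; the addresses below are placeholders standing in for the obfuscated ones in the question:

        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "newsletter@example.com"      # placeholder sender
        msg["To"] = "subscriber@example.com"        # placeholder recipient
        msg["Subject"] = "Monthly newsletter"
        # RFC 2369: comma-separated URIs, each in angle brackets; a mailto:
        # and an http form can be combined, as in the question.
        msg["List-Unsubscribe"] = (
            "<mailto:unsubscribe@example.com>, "
            "<http://example.com/member/unsubscribe?id=12345>"
        )
        msg.set_content("Newsletter body ...")
        print(msg)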

    Read the article

  • How does activity ID affect calculations such as schedule % complete when using a baseline?

    - by Jeffrey McDaniel
    Fields such as schedule % complete and planned value cost that use a baseline to help determine their value depend on the activity IDs matching between the baseline project and the current project. If an activity ID is changed, the link is broken. In the P6 power client there is an internal GUID that allows you to change the activity ID in either the baseline or the current project and still have these values related. In the P6 Reporting Database, however, the activity ID is the joining key that determines which activities in a baseline project match which activities in the current project.
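
    A minimal sketch of the matching behavior described above; the field names are illustrative, not the actual STAR schema:

        # Rows keyed by activity ID: the baseline/current match is made
        # purely on the ID, so renaming an activity breaks the link.
        baseline = {"A1000": {"planned_value": 500.0},
                    "A1010": {"planned_value": 300.0}}
        current = {"A1000": {"pct_complete": 0.4},
                   "A1010-RENAMED": {"pct_complete": 0.9}}

        for act_id, row in current.items():
            base = baseline.get(act_id)  # join on activity ID
            if base is None:
                print(act_id, "- no matching baseline activity; baseline-driven fields stay empty")
            else:
                print(act_id, "- earned value:", base["planned_value"] * row["pct_complete"])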

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with their ranges. In P6 Reporting Database 3.0 any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source. Considerations:
    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.
    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation because there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources, the partition setting was set to 2, and along came a 3rd data source. The necessary steps to accommodate this change are as follows: a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on; these appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on adjusting partition ranges.) b) Run starETL -r. This recreates each table with the new partition key; the effect of this step is that all table data is lost except for history-related tables. c) Run starETL for each of the 3 data sources (with the data source number, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide). The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is better to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.
    3. The number of partitions and the number of months per partition are not specific to multi-data-source installations. These settings control a sub-partitioning of the larger tables by time-related data and are dataset-specific optimizations. The number of months per partition is self-explanatory: the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" are created at that number of months each. For example, if you kept the default of 3 partitions and selected 2 months per partition, you would end up with: 1st partition, 2 months; 2nd partition, 2 months; 3rd partition, all the remaining records. Therefore, with regard to this setting, it is important to analyze your source database's spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges fill up and when additional partitions need to be added. If you reach the final range partition and there are no additional range partitions, all remaining data will be included in the last partition.
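
    A small sketch of the bucketing arithmetic described above; the dates and defaults are illustrative:

        from datetime import date

        def partition_ranges(start: date, num_partitions: int, months_per: int):
            """Date buckets produced by the number-of-partitions and
            months-per-partition settings; the final bucket is open-ended,
            so once earlier ranges fill up, all remaining records land there."""
            def add_months(d: date, n: int) -> date:
                years, month0 = divmod((d.month - 1) + n, 12)
                return date(d.year + years, month0 + 1, 1)

            ranges, lo = [], start
            for _ in range(num_partitions - 1):
                hi = add_months(lo, months_per)
                ranges.append((lo, hi))
                lo = hi
            ranges.append((lo, None))  # last partition: all remaining records
            return ranges

        # Default of 3 partitions at 2 months each: two 2-month buckets,
        # then everything else in the third.
        for lo, hi in partition_ranges(date(2010, 1, 1), 3, 2):
            print(lo, "to", hi if hi else "open-ended")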

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time: W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day, such as calendar year, month, week, and quarter, allowing it to be used as the time element when creating requests in Analytics that group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is based on the same date range. The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are related to the defined date range, which is a start date plus a rolling interval, generally the start date plus 3 years.

    In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near-real-time reporting in P6, and the same date range is validated and used for the P6 Reporting Database. The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 is the min date because we always back-fill to the beginning of the year. On April 2nd the Extended Schema services run, and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process runs, the Reporting Database picks up this change and also adjusts the max date in W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs that causes a recalculation, but with general system usage the dates in these tables progress forward with the rolling interval.

    Choosing a large date range can affect the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days, it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data beyond that 3-year range is bucketed into the last day of the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally that much granularity years into the future is not needed. Remember, all those values 5, 10, 15, or 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
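
    A minimal sketch of the rolling-range arithmetic described above (leap-day edge cases ignored):

        from datetime import date

        def reporting_date_range(today: date, rolling_years: int):
            """Min date is back-filled to January 1 of the current year;
            the max date rolls forward with each day's run."""
            min_date = date(today.year, 1, 1)
            max_date = date(today.year + rolling_years, today.month, today.day)
            return min_date, max_date

        # April 1, 2010 with a three-year interval -> 1/1/2010 .. 4/1/2013
        print(reporting_date_range(date(2010, 4, 1), 3))
        # The next day's run moves the max forward to 4/2/2013.
        print(reporting_date_range(date(2010, 4, 2), 3))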

    Read the article

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's quite great; I'm actually understanding more about shaders and the non-fixed pipeline in 1 week than in the 2 years I spent trying to learn the OpenGL fixed-pipeline functions. But here's my question: from what I think I've understood, the vertex shader is run for each vertex in the VBO. But the fragment shader is run for each pixel (is that right?), which is a huge number compared to, let's say, the 3 vertices of a triangle. Now it seems that in the vertex shader the out variables (like colors and stuff) are passed 1 to 1 to the fragment shader. But let's say that I pass the position of the vertex from the vertex shader to the fragment shader. How is it all executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
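
    The usual answer is that no single vertex is picked: by default the rasterizer interpolates each vertex shader output across the triangle, so every fragment receives a barycentric blend of the values from A, B, and C. A small sketch of that blending (perspective correction omitted):

        def barycentric(p, a, b, c):
            """Barycentric weights of 2D point p inside triangle (a, b, c)."""
            (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
            den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
            wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
            wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
            return wa, wb, 1.0 - wa - wb

        # Triangle vertices A, B, C, each with a per-vertex "out" value
        # (here, a single color channel).
        A, B, C = (0.0, 0.0), (10.0, 0.0), (0.0, 10.0)
        out_vals = {"A": 1.0, "B": 0.0, "C": 0.5}

        # A fragment at the centroid gets an equal blend of all three vertices.
        wa, wb, wc = barycentric((10 / 3, 10 / 3), A, B, C)
        value = wa * out_vals["A"] + wb * out_vals["B"] + wc * out_vals["C"]
        print(round(wa, 3), round(wb, 3), round(wc, 3), round(value, 3))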

    Read the article

  • my mouse blinks!

    - by Jeffrey
    Hello, I'm new to Linux systems and I have the same problem in every distro: every time I move the mouse or do anything with it, the pointer blinks! I think it's an Xorg problem or a driver problem. The only way it does not do that is when I start with the low-resolution error using a xorg.conf from another graphics chipset, and I don't think that's the best choice. Please help if you can. I have an ATI Rage 128. Thanks a lot for your help, people.

    Read the article

  • Nvidia Powermizer Performance Levels

    - by jeffrey
    Is there any way to configure Nvidia PowerMizer performance levels? My current setup has 3 power levels, with the lowest one being 50 MHz. The problem with this is that compiz lags when the card drops to the lowest performance level, 0: minimizing, maximizing, and dragging windows are all sluggish at that level. Once PowerMizer leaves level 0 everything is very smooth and runs great. Is there any way for me to remove level 0 and just run the two higher levels, 1 and 2? I don't want to completely disable PowerMizer, but I can't stand the lagging once it drops into performance level 0. Setting the option "prefer maximum performance" fixes the problem because it disables PowerMizer, but at its stock 850 MHz the GPU is overkill for most desktop use. intel i5 2500k asus gene-z z68 evga 560ti fpb (driver 295.40) ubuntu 12.04 LTS x64

    Read the article

  • Is there a better IDOMImplementation than MSXML?

    - by Chau Chee Yang
    There are 3 IDOMImplementations available in Delphi: MSXML, Xerces XML, and ADOM XML v4. MSXML is the default IDOMImplementation. My test counts the time needed to load a 10 MB XML file. I use a Delphi unit generated from an XSD using XML data binding to load the XML file. This unit has 3 common functions: function Getmenubar(Doc: IXMLDocument): IXMLMenubarType; function Loadmenubar(const FileName: WideString): IXMLMenubarType; function Newmenubar: IXMLMenubarType; I have read comments on the web that MSXML's overhead is so high that it doesn't perform well compared to other XML parsers. However, my study shows that MSXML is the best among them, Xerces XML second, and ADOM XML v4 the worst: MSXML - 0.6410 seconds; Xerces XML - 2.4220 seconds; ADOM XML v4 - 67.50 seconds. I also came across OmniXML, which claims much better performance than MSXML, but I never succeeded in using it with the unit generated by XML data binding. Is there any other vendor implementing Delphi's IDOMImplementation that works much better than MSXML? I am using Delphi 2010 and Windows 7.
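
    The timing methodology described above, sketched in Python purely for illustration; the file name is a placeholder, and any DOM parser could be substituted for comparison:

        import time
        import xml.dom.minidom

        # Wall-clock a full DOM load of a large document, mirroring the
        # load-time test described above. "menubar.xml" is a placeholder.
        start = time.perf_counter()
        doc = xml.dom.minidom.parse("menubar.xml")
        elapsed = time.perf_counter() - start
        print(f"loaded <{doc.documentElement.tagName}> in {elapsed:.4f} seconds")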

    Read the article

  • actionscript calling javascript with Security Exception

    - by Jeffrey Chee
    I have a SWF hosted at domain A and an HTML page at domain B. My SWF loads fine when accessed from the HTML page at domain B. However, the SWF gets a SecurityError: Error #2060: Security sandbox violation: ExternalInterface caller http://domainA.com/TrialApp.swf cannot access http://DomainB.com/. The AS3 is just the following: ExternalInterface.call("javascript:_invite();"); I've also loaded the cross-domain policy file from domain B during initialization: Security.loadPolicyFile( "http://DomainB/crossdomain.xml" ); How do I go about solving this? In my HTML I have allowscriptaccess='always'. Thanks in advance

    Read the article

  • unable to add objects to saved collection in GAE using JDO

    - by Jeffrey Chee
    I have a ClassA containing an ArrayList of another class, ClassB. I can save a new instance of ClassA, with its ClassB instances also saved, using JDO. However, when I retrieve the instance of ClassA and try to do the following: ClassA instance = PMF.get().getPersistenceManager().getObjectById( ClassA.class, someId ); instance.getClassBArrayList().add( new ClassB(...) ); I get an exception like the one below: Uncaught exception from servlet com.google.appengine.api.datastore.DatastoreNeedIndexException: no matching index found. So I was wondering: is it possible to add a new item to the previously saved collection, or was it something I missed? Best Regards

    Read the article

  • How to efficiently SELECT rows from database table based on selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that holds the customer's ID. There are about 10,000 different customer codes. I have a GUI that allows the user to render a report from the transaction table, and the user may select an arbitrary number of customers. I used the IN operator first, and it works for a few customers: SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...') I quickly ran into problems when selecting a few thousand customers; there is a limit on the size of an IN list. An alternative is to create a temporary table with a single CODE field and insert the selected customer codes into it with INSERT statements. I can then use: SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE) This works nicely for huge selections. However, there is performance overhead in creating the temporary table, inserting the codes, and dropping the temporary table. Are you aware of a better solution to handle this situation?
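
    A sketch of the temporary-table approach in Python with sqlite3; the table names follow the question, and sqlite3 stands in for whatever database is actually used:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE TRANS_TABLE (CODE TEXT, AMOUNT REAL)")
        conn.executemany("INSERT INTO TRANS_TABLE VALUES (?, ?)",
                         [("C001", 10.0), ("C002", 25.0), ("C003", 7.5)])

        selected = ["C001", "C003"]  # could be thousands of codes

        # Load the selection into a temp table once, then join: this avoids
        # the size limit on IN (...) lists.
        conn.execute("CREATE TEMP TABLE TEMP_CODES (CODE TEXT PRIMARY KEY)")
        conn.executemany("INSERT INTO TEMP_CODES VALUES (?)",
                         [(c,) for c in selected])

        rows = conn.execute("""
            SELECT A.* FROM TRANS_TABLE A
            INNER JOIN TEMP_CODES B ON A.CODE = B.CODE
        """).fetchall()
        print(rows)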

    Read the article

  • Looking for a solution to persist row sequence in a database table that allows efficient reordering at

    - by Chau Chee Yang
    I have a database table with a field named "Sequence" that indicates the order in which the rows are presented visually in a report or grid. When the rows are retrieved and presented in a grid, there are a few UI gadgets that allow the user to reorder the rows arbitrarily. The new sequence is persisted in the database table when the user commits the changes. Here are some UI operations: move to first; move to last; move up 1 row; move down 1 row; multi-select rows and move up or down; multi-select rows and drag to a new position. For operations like "Move to first" or "Move to last", many rows are usually involved, and the sequence of all those rows would need to be updated accordingly. This may not be efficient enough at runtime. It is common practice to use INTEGER as the sequence's data type. Another solution is to use DOUBLE or FLOAT, which could avoid the mass update of row sequences, but we will face problems if we reach the limit of precision.
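
    A sketch of the fractional-key idea mentioned above, with the renumbering fallback for when precision runs out; the helper names are illustrative:

        def key_between(before, after):
            """New Sequence value that sorts between two neighbors, so a
            move updates only the moved row instead of the whole table."""
            if before is None and after is None:
                return 1.0
            if before is None:                 # moved to first
                return after - 1.0
            if after is None:                  # moved to last
                return before + 1.0
            return (before + after) / 2.0

        rows = {"a": 1.0, "b": 2.0, "c": 3.0}
        rows["c"] = key_between(None, rows["a"])   # "Move to first": one UPDATE
        print(sorted(rows, key=rows.get))          # ['c', 'a', 'b']

        # Repeated midpoints eventually exhaust floating-point precision;
        # when the gap closes, renumber every row once with spaced integers.
        def renumber(rows):
            for i, key in enumerate(sorted(rows, key=rows.get), start=1):
                rows[key] = float(i * 1024)        # gaps for future moves

        renumber(rows)
        print(rows)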

    Read the article

  • compilation of image stitching code in matlab

    - by chee
    I am facing lots of problems while running the image stitching code given at this link: http://se.cs.ait.ac.th/cvwiki/matlab:tutorial:image_stitching_from_high-view_images_using_homography May I get help regarding this type of problem here? EDIT: The image stitching code fails with the following message: ??? Undefined function or variable 'x2'. Error in ==> compute_direct_homography at 26 amplified_x2=x2.*repmat([diagonal_ratio(x1,x2) diagonal_ratio(x1,x2) 1]',1,size(x2,2)); %assumption 1degree of lat and long =110,000 meters refer wiki Error in ==> project at 3 compute_direct_homography;

    Read the article

  • Single Loader to multiple Sprites possible?

    - by Jeffrey Chee
    I've looked at various resources regarding this topic, and it seems to me that I need a Loader for every Sprite that contains an image file (PNG). I'm trying to make a tile rendering system and have created a grid of X by Y sprites, but all of them actually reference the same image file. Is there any other way to do this (make the sprites share the same PNG data)? Here is some sample code of what I have done:

        // Create an array of X * Y Loaders
        var cTileLoaders:Array = new Array( 100 ); // for example, a 10 by 10 grid
        var cTiles:Array = new Array( 100 );
        var nIndex:int = 0;
        var nImgLoadCount:int = 0;
        for ( ; 100 > nIndex; ++nIndex )
        {
            cTileLoaders[ nIndex ] = new Loader();
            cTiles[ nIndex ] = new Sprite();
            // perform more sprite initialization ...
            cTileLoaders[ nIndex ].contentLoaderInfo.addEventListener( Event.COMPLETE, ImageLoaded );
            cTileLoaders[ nIndex ].load( new URLRequest( "some image path" ) );
        }
        // handler for image loaded
        function ImageLoaded( eEvent:Event ):void
        {
            ++nImgLoadCount;
            // when all 100 sprite images are loaded (assuming no i/o error),
            // attach each loader's content to its corresponding tile
            if ( 100 == nImgLoadCount )
            {
                for ( var i:int = 0; 100 > i; ++i )
                {
                    cTiles[ i ].addChild( cTileLoaders[ i ].content );
                }
            }
        }

    Read the article

  • Sonicwall SSL VPN: Unable to reconnect once connection drops

    - by Jeffrey Hantin
    One of my users is having problems with his NetExtender connection. After installing NetExtender from the portal, it connects fine -- ONCE. After that, attempting to reconnect gives Verifying user...authentication fail! and the log on the router shows: [timestamp] | Info | SSLVPN | Auth Failed: No user name in http request (message id: 1079) This seems odd to me because the user name, password and domain are entered on the NetExtender client. After this error occurs, the only way to connect again is to uninstall, reboot, and reinstall NetExtender. He can connect fine to the Sonicwall SSLVPN demo site, and a different user can connect fine to this site from a different PC. Any clues?

    Read the article

  • Why do I get error 0x80070004 when trying to update to Windows 8.1 from Windows 8?

    - by Jeffrey Lin
    So, I'm trying to update Windows 8 to Windows 8.1 via the Windows Store, but every time I attempt it, the update downloads properly and then I get the error: Windows 8.1 This app wasn't installed - view details When I click on it, it says: Something happened and the Windows 8.1 could not be installed. Please try again. Error code: 0x80070004 Try again Cancel Install What does this mean? A quick Google search yields nothing. I have tried rebooting, clearing the Store cache, and resetting Windows Update. A quick chkdsk scan shows no errors. An SFC scan shows that there are many issues: http://pastebin.com/TZiH8ZXZ Could this be the issue? I found the error log! http://pastebin.com/BXZEsejm Why is the registry corrupt?

    Read the article

  • IIS 7.5 401.3 Access Denied

    - by Jeffrey
    I am having a weird issue with IIS 7.5 on Windows 2008 R2 x64. I created a site in IIS, manually created a test file index.html, and everything worked. When I try to do a deployment, I copy all the files from my local PC to the IIS server, try to access index.html (the properly deployed file), and get a 401.3 access denied error. If I then manually recreate index.html and copy the content into the newly created file, the page is accessible again... I just can't figure this out. So the issue is that IIS 7.5 can't serve files that have been copied from other PCs. I tried to reset/apply permission settings on the copied folders/files but nothing has worked. Please help. Thanks! By the way, the files I copied are just some HTML cutups, i.e. generic HTML, CSS and image files, nothing special.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9  | Next Page >