Microsoft 70-767 Exam Dumps 2019

Proper study for the 70-767 Implementing a SQL Data Warehouse exam begins with preparation products designed to help you pass the 70-767 test on your first attempt. Try the free demo questions right now.

Free demo questions for Microsoft 70-767 Exam Dumps Below:

NEW QUESTION 1
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in the series.
Start of repeated scenario
Contoso, Ltd. has a Microsoft SQL Server environment that includes SQL Server Integration Services (SSIS), a data warehouse, and SQL Server Analysis Services (SSAS) Tabular and multidimensional models.
The data warehouse stores data related to your company's sales, financial transactions, and financial budgets. All data for the data warehouse originates from the company's business financial system.
The data warehouse includes the following tables:
70-767 dumps exhibit
The company plans to use Microsoft Azure to store older records from the data warehouse. You must modify the database to enable the Stretch Database capability.
Users report that they are becoming confused about which city table to use for various queries. You plan to create a new schema named Dimension and change the name of the dbo.city table to Dimension.city. Data loss is not permissible, and you must not leave traces of the old table in the data warehouse.
You plan to create a measure that calculates the profit margin based on the existing measures.
You must improve performance for queries against the fact.Transaction table. You must implement appropriate indexes and enable the Stretch Database capability.
End of repeated scenario
You need to resolve the problems reported about the dbo.city table.
How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
70-767 dumps exhibit

    Answer:

    Explanation: 70-767 dumps exhibit
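The exhibit is unavailable, but a standard way to meet the stated requirements (move the table into a new Dimension schema with no data loss and no trace of the old table) is CREATE SCHEMA followed by ALTER SCHEMA ... TRANSFER, sketched here:

```sql
-- Create the new schema, then move dbo.city into it.
-- ALTER SCHEMA ... TRANSFER is a metadata-only operation: no data is copied
-- or lost, and the table no longer exists under the dbo schema afterward.
CREATE SCHEMA Dimension;
GO
ALTER SCHEMA Dimension TRANSFER dbo.city;
```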

    NEW QUESTION 2
    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
    You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
    Users report performance degradation when they run the following stored procedure:
    70-767 dumps exhibit
    You need to optimize performance.
    Solution: You run the following Transact-SQL statement:
    70-767 dumps exhibit
    Does the solution meet the goal?

    • A. Yes
    • B. No

    Answer: B

Explanation: A sample of 100 rows out of 500,000 is too small.
    References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics

    NEW QUESTION 3
    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
    You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
    Users report performance degradation when they run the following stored procedure:
    70-767 dumps exhibit
    You need to optimize performance.
    Solution: You run the following Transact-SQL statement:
    70-767 dumps exhibit
    Does the solution meet the goal?

    • A. Yes
    • B. No

    Answer: A

Explanation: You can specify the sample size as a percent. A 5 percent statistics sample size would be sufficient here.
    References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics
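For reference, a statistics statement with an explicit percentage sample looks like the following sketch; the table, column, and statistics names are illustrative, not taken from the exhibit:

```sql
-- Illustrative statistics object with an explicit 5 percent sample,
-- which is usually adequate for tables with millions of rows.
CREATE STATISTICS stat_SalesOrderDetail_ProductID
ON dbo.SalesOrderDetail (ProductID)
WITH SAMPLE 5 PERCENT;
```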

    NEW QUESTION 4
    You need to recommend a storage solution for a data warehouse that minimizes load times. The solution must provide availability if a hard disk fails.
    Which RAID configuration should you recommend for each type of database file? To answer, drag the appropriate RAID configurations to the correct database file types. Each RAID configuration may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
    NOTE: Each correct selection is worth one point.
    70-767 dumps exhibit

      Answer:

      Explanation: Box 1: RAID 5
The read speed of RAID 5 is similar to that of RAID 0, provided the number of disks is the same. However, because reading the parity data is of no use, the read speed is only (N-1) times faster, not N times as in RAID 0.
      Box 2: RAID 10
      Always place log files on RAID 1+0 (or RAID 1) disks. This provides better protection from hardware failure, and better write performance.
      Note: In general RAID 1+0 will provide better throughput for write-intensive applications. The amount of performance gained will vary based on the HW vendor’s RAID implementations. Most common alternative to RAID 1+0 is RAID 5. Generally, RAID 1+0 provides better write performance than any other RAID level providing data protection, including RAID 5.

      NEW QUESTION 5
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
      After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You configure a new matching policy in Master Data Services (MDS) as shown in the following exhibit.
      70-767 dumps exhibit
You review the Matching Results of the policy and find that records you expect to match are identified as new values.
You verify that the data contains multiple records that have similar address values, and you expect some of the records to match.
You need to increase the likelihood that the records will match when they have similar address values.
Solution: You decrease the minimum matching score of the matching policy.
      Does this meet the goal?

      • A. Yes
• B. No

      Answer: A

Explanation: Decreasing the minimum matching score lowers the threshold that a record pair's matching score must exceed to be considered a match, so more record pairs with similar addresses will match.
      A data matching project consists of a computer-assisted process and an interactive process. The matching project applies the matching rules in the matching policy to the data source to be assessed. This process assesses the likelihood that any two rows are matches in a matching score. Only those records with a probability of a match greater than a value set by the data steward in the matching policy will be considered a match.
      References: https://docs.microsoft.com/en-us/sql/data-quality-services/data-matching

      NEW QUESTION 6
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
      After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
      You have a Microsoft SQL server that has Data Quality Services (DQS) installed.
You need to review the completeness and the uniqueness of the data stored in the matching policy.
Solution: You create a matching rule.
      Does this meet the goal?

      • A. Yes
      • B. No

      Answer: B

      Explanation: Use a matching rule, and use completeness and uniqueness data to determine what weight to give a field in the matching process.
If there is a high level of uniqueness in a field, using the field in a matching policy can decrease the matching results, so you may want to set the weight for that field to a relatively small value. If a column has a low level of completeness, you may not want to include a domain for that column.
      References:
      https://docs.microsoft.com/en-us/sql/data-quality-services/create-a-matching-policy?view=sql-server-2017

      NEW QUESTION 7
      You are designing a data transformation process using Microsoft SQL Server Integration Services (SSIS). You need to ensure that every row is compared with every other row during transformation.
      What should you configure? To answer, select the appropriate options in the answer area.
      NOTE: Each correct selection is worth one point.
      70-767 dumps exhibit

        Answer:

        Explanation: When you configure the Fuzzy Grouping transformation, you can specify the comparison algorithm that the transformation uses to compare rows in the transformation input. If you set the Exhaustive property to true, the transformation compares every row in the input to every other row in the input. This comparison algorithm may produce more accurate results, but it is likely to make the transformation perform more slowly unless the number of rows in the input is small.
        References:
        https://docs.microsoft.com/en-us/sql/integration-services/data-flow/transformations/fuzzy-grouping-transformati

        NEW QUESTION 8
You manage Master Data Services (MDS). You plan to create entities and attributes and load them with data. You also plan to match data before loading it into Data Quality Services (DQS).
        You need to recommend a solution to perform the actions.
        What should you recommend?

        • A. MDS Add-in for Microsoft Excel
        • B. MDS Configuration Manager
        • C. Data Quality Matching
        • D. MDS repository

        Answer: A

Explanation: In the Master Data Services Add-in for Excel, matching functionality is provided by Data Quality Services (DQS). This functionality must be enabled to be used.
To enable Data Quality Services integration:
1. Open Master Data Services Configuration Manager.
2. In the left pane, click Web Configuration.
3. On the Web Configuration page, select the website and web application.
4. In the Enable DQS Integration section, click Enable integration with Data Quality Services.
5. On the confirmation dialog box, click OK.
        References:
        https://docs.microsoft.com/en-us/sql/master-data-services/install-windows/enable-data-quality-services-integrati

        NEW QUESTION 9
        You create a Microsoft SQL Server Integration Services (SSIS) package as shown in the SSIS Package exhibit. (Click the Exhibit button.)
        70-767 dumps exhibit
        The package uses data from the Products table and the Prices table. Properties of the Prices source are shown in the OLE DB Source Editor exhibit (Click the Exhibit Button.) and the Advanced Editor for Prices exhibit (Click the Exhibit button.)
        70-767 dumps exhibit
        70-767 dumps exhibit
        You join the Products and Prices tables by using the ReferenceNr column. You need to resolve the error with the package.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
        70-767 dumps exhibit

          Answer:

Explanation: The Merge Join Transformation requires sorted data for its inputs. There are two important sort properties that must be set for the source or upstream transformation that supplies data to the Merge and Merge Join transformations: the IsSorted property of the output, and the SortKeyPosition property of the output columns.
If you do not use a Sort transformation to sort the data, you must set these sort properties manually on the source or the upstream transformation.
          References:
          https://docs.microsoft.com/en-us/sql/integration-services/data-flow/transformations/sort-data-for-the-merge-and-

          NEW QUESTION 10
          You manage an inventory system that has a table named Products. The Products table has several hundred columns.
          You generate a report that relates two columns named ProductReference and ProductName from the Products table. The result is sorted by a column named QuantityInStock from largest to smallest.
          You need to create an index that the report can use.
          How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL segments in the answer area.
          70-767 dumps exhibit

            Answer:

            Explanation: 70-767 dumps exhibit
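The exhibit is unavailable, but an index that supports the report described above (sorted by QuantityInStock from largest to smallest and covering the two reported columns) would plausibly look like the following sketch; the index name is illustrative, while the table and column names come from the question text:

```sql
-- Key on QuantityInStock DESC so the sort order matches the report;
-- INCLUDE the reported columns so the index covers the query.
CREATE NONCLUSTERED INDEX IX_Products_QuantityInStock
ON dbo.Products (QuantityInStock DESC)
INCLUDE (ProductReference, ProductName);
```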

            NEW QUESTION 11
            You need to load data from a CSV file to a table.
            How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
            NOTE: Each correct selection is worth one point.
            70-767 dumps exhibit

              Answer:

              Explanation: The Merge transformation combines two sorted datasets into a single dataset. The rows from each dataset are inserted into the output based on values in their key columns.
              By including the Merge transformation in a data flow, you can merge data from two data sources, such as tables and files.
              References:
              https://docs.microsoft.com/en-us/sql/integration-services/data-flow/transformations/merge-transformation?view
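The exhibits are unavailable, but for reference, a common way to load a CSV file into a table with Transact-SQL is BULK INSERT. This is a representative sketch with hypothetical file and table names; note that the FORMAT = 'CSV' option requires SQL Server 2017 or later:

```sql
-- Load a comma-separated file into a staging table.
BULK INSERT dbo.SalesStaging
FROM 'C:\data\sales.csv'
WITH (
    FORMAT = 'CSV',          -- parse quoted CSV (SQL Server 2017+)
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
```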

              NEW QUESTION 12
              Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
              You have a Microsoft SQL Server data warehouse instance that supports several client applications. The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer,
              Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
              All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
              You have the following requirements:
• Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
• Partition the Fact.Order table and retain a total of seven years of data.
• Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
• Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
• Maximize the performance during the data loading process for the Fact.Order partition.
• Ensure that historical data remains online and available for querying.
• Reduce ongoing storage costs while maintaining query performance for current data.
              You are not permitted to make changes to the client applications. You need to implement partitioning for the Fact.Ticket table.
              Which three actions should you perform in sequence? To answer, drag the appropriate actions to the correct locations. Each action may be used once, more than once or not at all. You may need to drag the split bar between panes or scroll to view content.
              NOTE: More than one combination of answer choices is correct. You will receive credit for any of the correct combinations you select.
              70-767 dumps exhibit

                Answer:

Explanation: From the scenario: Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
The recurring monthly maintenance tasks are: split the partition function to create a partition for the upcoming month, switch the oldest month's partition out to an archive table, and merge the emptied boundary to remove it.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/tables/manage-retention-of-historical-data-in-system-v
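The monthly sliding-window cycle can be sketched in Transact-SQL as follows; the partition function, scheme, filegroup, archive table, and boundary values are illustrative assumptions, not taken from the exhibits:

```sql
-- 1. Add a boundary so a partition exists for the upcoming month.
ALTER PARTITION SCHEME ps_TicketDate NEXT USED FG_Tickets;
ALTER PARTITION FUNCTION pf_TicketDate() SPLIT RANGE ('20190201');

-- 2. Switch the oldest month's partition out to an archive table
--    (a metadata-only operation, so it is fast).
ALTER TABLE Fact.Ticket SWITCH PARTITION 1 TO Fact.Ticket_Archive;

-- 3. Merge the now-empty boundary to remove the oldest partition.
ALTER PARTITION FUNCTION pf_TicketDate() MERGE RANGE ('20120201');
```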

                NEW QUESTION 13
                You have a data warehouse.
                You need to move a table named Fact.ErrorLog to a new filegroup named LowCost.
                Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
                70-767 dumps exhibit

                  Answer:

                  Explanation: Step 1: Add a filegroup named LowCost to the database. First create a new filegroup.
                  Step 2:
                  The next stage is to go to the ‘Files’ page in the same Properties window and add a file to the filegroup (a filegroup always contains one or more files)
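Steps 1 and 2 can equivalently be performed in T-SQL rather than the Properties window; in this sketch the database name and file path are illustrative:

```sql
-- Add the LowCost filegroup, then give it at least one data file.
ALTER DATABASE DW1 ADD FILEGROUP LowCost;
ALTER DATABASE DW1
ADD FILE (NAME = 'LowCost1', FILENAME = 'D:\SQLData\LowCost1.ndf')
TO FILEGROUP LowCost;
```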
                  Step 3:
Moving a table to a different filegroup involves moving the table's clustered index to the new filegroup. While this may seem strange at first, it is not that surprising when you remember that the leaf level of the clustered index actually contains the table data. Moving the clustered index can be done in a single statement using the DROP_EXISTING clause as follows (using one of the AdventureWorks2008R2 tables as an example):
                  CREATE UNIQUE CLUSTERED INDEX PK_Department_DepartmentID ON HumanResources.Department(DepartmentID)
                  WITH (DROP_EXISTING=ON,ONLINE=ON) ON SECONDARY
                  This recreates the same index but on the SECONDARY filegroup.
                  References:
                  http://www.sqlmatters.com/Articles/Moving%20a%20Table%20to%20a%20Different%20Filegroup.aspx

                  NEW QUESTION 14
                  Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
                  You have a database named DB1 that has change data capture enabled.
                  A Microsoft SQL Server Integration Services (SSIS) job runs once weekly. The job loads changes from DB1 to a data warehouse by querying the change data capture tables.
                  You remove the Integration Services job.
                  You need to stop tracking changes to the database. The solution must remove all the change data capture configurations from DB1.
                  Which stored procedure should you execute?

• A. catalog.deploy_project
• B. catalog.restore_project
• C. catalog.stop_operation
• D. sys.sp_cdc_add_job
• E. sys.sp_cdc_change_job
• F. sys.sp_cdc_disable_db
• G. sys.sp_cdc_enable_db
• H. sys.sp_cdc_stop_job

                  Answer: F

                  Explanation: sys.sp_cdc_disable_db disables change data capture for all tables in the database currently enabled. All system objects related to change data capture, such as change tables, jobs, stored procedures and functions, are dropped.
                  References:
                  https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-disable-db-transa
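Disabling change data capture for the database is a single call, sketched here under the assumption that the database is named DB1 as in the question:

```sql
USE DB1;
GO
-- Drops all CDC objects in the database: change tables, capture and
-- cleanup jobs, stored procedures, and functions.
EXEC sys.sp_cdc_disable_db;
```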

                  NEW QUESTION 15
                  You are a data warehouse developer.
                  You need to create a Microsoft SQL Server Integration Services (SSIS) catalog on a production SQL Server instance.
                  Which features are needed? To answer, select the appropriate options in the answer area.
                  70-767 dumps exhibit

                    Answer:

Explanation: Box 1: Yes
"Enable CLR Integration" must be selected because the catalog uses CLR stored procedures.
Box 2: Yes
Once you have selected the "Enable CLR Integration" option, another checkbox will be enabled, named "Enable automatic execution of Integration Services stored procedure at SQL Server startup". Click this check box to enable the catalog startup stored procedure to run each time the SSIS server instance is restarted.
70-767 dumps exhibit
Box 3: No
References:
                    https://www.mssqltips.com/sqlservertip/4097/understanding-the-sql-server-integration-services-catalog-and-crea
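For reference, CLR integration can also be enabled with Transact-SQL rather than the catalog creation dialog:

```sql
-- Enable CLR integration at the instance level, a prerequisite
-- for creating the SSIS catalog (SSISDB).
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```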

                    NEW QUESTION 16
                    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
                    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
                    You have a Microsoft Azure SQL Data Warehouse instance. You run the following Transact-SQL statement:
                    70-767 dumps exhibit
                    The query fails to return results.
                    You need to determine why the query fails.
                    Solution: You run the following Transact-SQL statements:
                    70-767 dumps exhibit
                    Does the solution meet the goal?

                    • A. Yes
                    • B. No

                    Answer: B

Explanation: The WHERE clause must filter on the [label] column, not on a query ID.
References:
                    https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec
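A sketch of a query that filters the dynamic management view on its label column; the label value itself is illustrative:

```sql
-- sys.dm_pdw_exec_requests exposes a [label] column for queries
-- submitted with OPTION (LABEL = '...'); filter on it, not on an ID.
SELECT request_id, status, total_elapsed_time
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'My daily load';
```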

                    NEW QUESTION 17
                    Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
                    You have a database named DB1 that has change data capture enabled.
                    A Microsoft SQL Server Integration Services (SSIS) job runs once weekly. The job loads changes from DB1 to a data warehouse by querying the change data capture tables.
                    You remove the Integration Services job.
                    You need to stop tracking changes to the database. The solution must remove all the change data capture configurations from DB1.
                    Which stored procedure should you execute?

• A. catalog.deploy_project
• B. catalog.restore_project
• C. catalog.stop_operation
• D. sys.sp_cdc_add_job
• E. sys.sp_cdc_change_job
• F. sys.sp_cdc_disable_db
• G. sys.sp_cdc_enable_db
• H. sys.sp_cdc_stop_job

                    Answer: F

                    100% Valid and Newest Version 70-767 Questions & Answers shared by Surepassexam, Get Full Dumps HERE: https://www.surepassexam.com/70-767-exam-dumps.html (New 109 Q&As)