Releases: DataLinkDC/dinky
Dinky v1.2.5
Fix
- Ignore tenant conditions when refreshing a job
- Fix null key returned for cache operation when adding a database
- Downgrade Guava to 31.1-jre to fix the CDC pipeline
- Fix the "Failed to recycle database: database in auto-commit mode" issue
- Fix the role menu update issue
- Fix the exception after submitting a CDCSOURCE job to sync data from tables whose names start with digits
- Add end time for Jar SQL tasks
Optimization
- Retain flink-table-planner-loader.jar in the Dockerfile
- Add an option in Flink on Kubernetes to control whether to include an OwnerReference
- Unify the definition of the Hadoop version
- Replace Chinese punctuation with English punctuation marks and fix typos
Contributors
@aiwenmo
@cantiandashu
@gaoyan1998
@gary-connear
@leeoo
@liguifa
@safishhh
@zackyoungh
Dinky v1.2.4
Feature
- Support the Flink table planner loader
Fix
- Fix the issue where the subtype menu in the document module does not refresh with the document type
- Fix the issue where sink.db in cdcsource cannot be empty when automatic table creation is enabled
- Modify the k8s deploy command
- Fix Jar SQL statement splitting
- Fix the issue where dinky-release-1.20-1.2.3.jar lacks dinky-cdc-plus.jar
- Fix the task positioning error
- Fix a bug that occurred when switching other sending methods to WeCom (corporate WeChat)
- Fix the CI/CD k3d bug
- Fix the k8s operator submission bug
- Fix the issue where the token cannot be refreshed properly
Optimization
- Add a new method findByTaskId to CatalogueService
Contributors
@aiwenmo
@gaoyan1998
@javaht
@jiangwwwei
@renyansongno1
@suxinshuo
Dinky v1.2.3
Feature
- Support recovery from the latest completed checkpoint
Fix
- Fix the issue where the FlinkSQL history version switch button does not take effect
- Fix the issue of the page constantly refreshing
- Fix the issue where two Flink configuration items with the same prefix cause errors in Flink configuration parsing
- Fix the issue where the form is cleared on the frontend when creating a new task
- Sync the Helm configuration with the src/main/resources directory
- Fix the issue where Array-type values in Paimon queries were not shown
- Fix metadata-hive MissingFormatArgumentException
- Fix import errors in utility classes and fix configuration issues related to frontend display
- Fix comparison expression issue
- Fix the issue where Paimon tinyint data cannot be previewed in the data source center
- Fix an error in the comparison of Flink JAR task historical caches
- Fix page flickering caused by WebSocket reconnection
- Fix the issue where the initialization request on the login page is loaded multiple times
Optimize
- Check whether the cluster is disabled before starting the Flink session cluster
- Optimize WebSocket for asynchronous sending
Contributors
Dinky v1.2.2
Feature
- Support data studio catalog tree list scroll interaction and search
Fix
- Fix the bug in mounting the log configuration file in Flink
- Fix the inability to execute statements such as CREATE DATABASE
- Fix for pipeline.jars configuration during task submit
- Fix repeated job submission when switching tabs on the detail page
- Fix the issue where projects with the same name but different parents cannot be created
- Fix the incorrect non-existence error reported when parsing global variables
- Fix the failure of task startup due to empty configuration
- Fix the issue where the history version was not refreshed after the task was pushed
Optimize
- Optimize the registration center document modal display when English is enabled
- Replace NPM with PNPM
- Optimize the UDF save placeholder
- Optimize the WebSocket architecture to work with Spring events
- Optimize catalog table info
- Display the timestamp type field as a string value when previewing data
- Compatible with kubernetes.container.image and kubernetes.container.image.ref
- Optimize web package.json content
- Optimize the internationalization of prompt messages
- Optimize the verification of the availability of cluster configuration before job execution
Contributors
@aiwenmo
@Jam804
@jiangwwwei
@MactavishCui
@zackyoungh
@Zzm0809
Dinky v1.2.1
Feature
- Support CALL statement
- Flink Kubernetes operator supports ingress
Fix
- Fix Flink JAR submission
- Fix deserialization exceptions caused by incorrect get methods in enum fields
- Change k8s StringUtils import
- Fix FlinkJar args global variable parsing
- Fix the issue where variable errors cannot be thrown
- Fix for debug task when target table contains '.'
- Fix the lineage of task with variable
- Fix PostgreSQL errors when using concat
- Fix for task lock strategy condition
- Fix FlinkJar task lost info
- Fix click to trigger savepoint error
- Fix for reading 'root-exception'
- Fix the errors when querying the data of numeric and date types in Paimon
- Fix the issue where the CALL statement cannot be executed on a standalone cluster
- Fix the issue where the SET statement does not take effect in application mode
- Fix WebSocket sessions not being closed correctly
Optimize
- Optimize the execution logic of the script
- Optimize DolphinScheduler push info when English is enabled
- Optimize docker image build
- Optimize the home page icon and job detail lineage under the dark theme
- Optimize lineage relationship chart display
- Optimize e2e test
- Add source url param in CDCSOURCE
- The user-defined flink conf path overrides the flink conf path parameter
Document
- Fix normal deploy doc error
- Quick experience doc update
Contributors
@aiwenmo
@gaoyan1998
@Jam804
@liguifa
@MactavishCui
@zackyoungh
@Zzm0809
Dinky v1.2.0
Feature
- Add npm profiles
- Add bug template version number
- Add a built-in Flink History Server to reduce UNKNOWN states and make the final information of Flink tasks more accurate
- Added support for Flink 1.20 and updated dependencies on other Flink versions
- Support task export
- Add global token
- Support physical deletion of resources
- Support Paimon HDFS and Hive data sources
- Obtain job information using the ingress address
- Support insert result preview for Flink SQL tasks
- Add welcome init page
- FlinkSQL Studio supports real-time update task status
- Add a form for Flink JAR tasks
- Provide init tools
- Support the PostgreSQL Flink catalog
- Add the E2E test programming process
Fix
- Fix the issue of error when executing show statement
- Fix Json serialization and deserialization
- Fix the Flink 1.19 CLI bug
- Fix bug "all ip port is not available"
- Fix the issue where the enable button in Git Project forms does not have a default value
- Fix the issue with the saveOrUpdate method in the git project module
- Fix SavePoint path logic and adjust the configuration method of Flink configuration acquisition
- Resolve the issue of "Exceeding storage quota" when too many job tabs are open
- Fix git build some bug
- Fix the issue of unsupported global variable substitution when fetching field lineage
- Fix thumbnail display in code editor
- Fix the SQL auto initialization issue of PG
- Fix oracle column type convert error
- Fix some bugs that occurred when Flink was submitted in local mode
- Fix the issue where Flyway does not support MySQL 5.7
- Fix abnormal data in pg query
- Fix exception caused by no instance when clicking on a job on the optimization workbench
- Fix execution failure
- Fix the Oracle primary key column query bug
- Fix the issue where the task tree cannot be sorted
- Fix the issue of unlimited refresh of Git project pages
- Fix null pointer exception occurs when dinky configures DingTalk alarm
- Fix some minor bugs
- Fix the issue of array out of bounds when fetching lineage information
- Fix SQL injection error caused by upgrading Druid version
- Fix the catalog display field bug
- Fix EXECUTE JAR submission in yarn-application mode
- Fix null pointer exception occurs in alert
- Fix the issue where a hyphen in the table name prevents task execution
- Fix configuration key error
- Fix job alert dinky address url
- Fix menu mapper
- Fix job id is null exception in query model
- Fix Kerberos-related bugs, SQL SET values not taking effect, etc.
- Fix do not save job instance in query mode
- Fix ws bug
- Fix web package
- Fix Dinky backend CI workflow with Flink 1.20
- Fix the issue of primary key generation strategy
- Fix the issue of Object not found when mocking statement
- Fix datastudio footer state
- Fix the issue of incomplete dependencies in the docs module
- Fix the floating button when closing the data development page
- Fix the data development page when system configuration is enabled
- Fix the k8s form ingress bug
- Fix the route redirection error on the welcome page
- Fix Flink task submission in session mode
- Fix a web NPE
- Fix the web clear bug
- Fixed an error when using the copy button in the Resource Center
- Fix the issue of creating a new task with a subdirectory of the same name
- Fix task name restrictions when running in Kubernetes mode
- Fix k8s test bug
- Fix global CSS style confusion caused by introducing LESS in data development
- Fix the Flink JAR task toolbar display in data development
- Fix pg bug
- Fix dolphinscheduler calls dinky tasks and concurrent execution exceptions
- Fix the issue where the Yarn web UI fails to obtain the task status when submitting a Flink task after enabling Kerberos authentication
- Fix the issue where the submitted job name remains unchanged when renaming the job
- Fix alert serializable
- Fix login bug
- Fix flink jar submit
- Fix automation script path issue
- Fix git code builds error
- Fix yarn parallel submit
- Fix NPE when executing a query statement on the PG table
- Fix the issue where FlinkJar cannot use global variables
Optimize
- Optimize version update logic to solve cache issues caused by upgrades
- Optimize the workbench page display
- Refactor metric request
- Refactor the method of obtaining user.dir
- SSE switch to global websocket and web container switches from Tomcat to Undertow
- Add getSchemas and getTables api
- Delete dinky_cluster index
- Optimize mapper queries
- Optimize class attribute type issues
- Delete the prompt message on the UDF registration management page
- Optimize some web layouts to make them more user-friendly when displayed on small screens
- Optimize virtual scrolling in the data source detail list
- Optimize login page
- Optimize doc action
- Upgrade doc some deps
- Improve get table info of the schema
- Optimize program start
- Optimize cluster configuration and start session cluster for manual registration
- Optimize the introduction and layout of configuration items in the configuration center
- Optimize role permission tips
- Add a loading effect when obtaining lineage
- Try to achieve unified JSON(jackson) serialization as much as possible
- Add hints: role and tenant are bound
- Optimize some page layouts, update web dependencies, and fix some bugs
- Modify and upgrade SQL file version number
- Optimize the display of Flink operator diagram in the operation and maintenance center
- Optimize dinky flink web UI
- Change the Oracle timestamp column type order to precede the time column
- Optimize task list layout
- Optimize some code
- Add repeat import task
- Limit the maximum percentage of container memory used by the JVM via -XX:MaxRAMPercentage
- Optimize K8s log printing
- Optimize flink application mode status refresh
- Refactoring a new data development interface to enhance the user experience
- Remove the restriction on underscores in job names
- Change token key name
- Remove quotation marks when building FlinkSQL
- Upgrade cdc to 3.2.0
- Add package-lock.json
- Refactor get version function
- Add tag right-click function
- Optimize the new UI
- Optimize debug task to preview data
- Optimize FlinkDDL execution sequence
- Remove the old version of the data development page and fix some minor details
- Uniformly use '/' as the file separator
- Optimize explain and add test
- Move DataStudioNew to DataStudio
- Refactor result query
- Add websocket PING PONG
- Add footer last update time
- Optimize the style of IDE
- Remove old lineage
- Optimize datastudio theme
- Optimize CDCSOURCE and support sink print and mock
- Optimize the offline button icon
- Optimize the web icon
- Improve the print table data display method
- Optimize the status of running tasks and beautify the UI
- Optimize the logic for constructing role menus
- Improve the missing exception message shown when uploading files in the Resource Center
- Optimize submit task print error log
- Click the Tasks tab to switch to Service Synchronization
- Delete the previously failed cluster when resubmitting the task
- Optimize flink jar form select
- Optimize app package size
- Variable suggestion optimization
- Add Deployment status monitoring
- Add resource management to the datastudio page
- Optimize some script
- Add default jobmanager.memory.process.size parameter
- Optimize the scheduler request error assertion method
- Refactor UDF execution
- Optimize lineage acquisition, add savepoint support, and optimize UDF class name display
- Optimize the DevOps page UI
- Modify the SQLite data location
- Change Chinese comments to English comments
- Add welcome page auto width
- Add push task into DolphinScheduler
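The optimization above that limits container memory usage via -XX:MaxRAMPercentage can be sketched as a launch-script fragment. Note this is an illustrative sketch only: DINKY_HOME and the jar name are hypothetical placeholders, not taken from Dinky's actual startup script.

```shell
# Cap the JVM at a share of the container's memory instead of a
# fixed -Xmx, as in the -XX:MaxRAMPercentage optimization above.
# -XX:+UseContainerSupport (default since JDK 10) makes the JVM
# read the container's cgroup memory limit.
JVM_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=70.0"
# Print the command rather than executing it; DINKY_HOME and the
# jar name below are hypothetical.
echo "java ${JVM_OPTS} -jar \${DINKY_HOME}/dinky-admin.jar"
```

With a 2 GiB container limit, this caps the heap at roughly 1.4 GiB, leaving headroom for metaspace, threads, and native memory.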
Document
- Refine the documentation for submitting tasks in k8s mode
- Add Datasophon integration with Dinky
- Add Flink Cli Doc
- Update ICP in document
- Update deploy guide reference
- Fix deploy doc
- Doc of debug data preview update
- Update images in Quick Start documentation
- Fix Dinky being unable to start without Flink dependencies
- Optimize the package size of the App and the rs protocol
- Fix the wrong links about source code deployment in README.md and README_zh_CN.md
Contributors
@aiwenmo
@binggana
@chenhaipeng
@dagenjun
@emmanuel-ferdman
@gaoyan1998
@gphwxhq
@hashmapybx
@Jam804
@javaht
@jianjun159
@leechor
@MactavishCui
@maikouliujian
@MaoMiMao
@miaoze8
@RainHXXXX
@stevenkitter
@soulmz
@suger-bl
@suxinshuo
@yuxiqian
@zackyoungh
@zhuangchong
@zhuxt2015
@Zzm0809
@18216499322
v1.1.0
Dinky-1.1.0 Release Note
Incompatible Changes
- v1.1.0 supports the automatic schema upgrade framework (Flyway), using the table structure/data as of v1.0.2 as the default base version. If your version is below v1.0.2, you must first upgrade to the v1.0.2 table structure according to the official upgrade tutorial. If your version is v1.0.2 or later, you can upgrade directly; the program will execute the migration automatically without affecting historical data. If you are deploying from scratch, you can ignore this note.
- Because Flink CDC was contributed to the Apache Foundation, the package name changed in the new version and no compatibility shim is possible. Dinky v1.1.0 and above use the new package name dependencies, which requires your flink-cdc dependencies to be upgraded to Flink CDC v3.1+; otherwise it will not work.
- Removed the Scala version distinction when packaging; development targets Scala 2.12 only, and Scala 2.11.x is no longer supported.
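In practice, the Flink CDC package-name change above means Maven coordinates move from the old com.ververica group to org.apache.flink. The fragment below is an illustrative example using the MySQL connector; the exact artifact names and versions are assumptions, so verify them against the Flink CDC release you target.

```xml
<!-- Before: Flink CDC < 3.1, Ververica packaging (example version) -->
<dependency>
  <groupId>com.ververica</groupId>
  <artifactId>flink-sql-connector-mysql-cdc</artifactId>
  <version>2.4.2</version>
</dependency>

<!-- After: Flink CDC 3.1+, Apache packaging, required by Dinky v1.1.0+ -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-mysql-cdc</artifactId>
  <version>3.1.0</version>
</dependency>
```

Java imports change accordingly (com.ververica.cdc.* becomes org.apache.flink.cdc.*), which is why no compatibility shim was possible.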
New Features
- Added Flyway schema upgrade framework.
- Task directory supports flexible sorting.
- Implemented task-level permission control and supports different permission control strategies.
- Optimized the automatic addition of administrator user association when adding tenants.
- Added the function to directly kill the process in the case of task submission deadlock.
- Support k8s deployment of dinky.
- Implement data preview.
- New support for UDF injection configuration in data development.
- Added sink-side table name mapping for the whole-database synchronization function (cdcsource), with regex-based mapping modification.
- Added Dashboard page.
- Added Paimon data source type.
- Added SQL-Cli.
Fixes
- Fixed the k8s account.name value issue and the Conf initialization problem when deleting a cluster.
- Fixed the issue of flink-cdc losing SQL in application mode.
- Fixed the issue where the task creation time was not reset when copying tasks.
- Fixed the task list positioning problem.
- Solved the problem of user-defined classes in user Jars not being compiled when submitting Jar tasks.
- Fixed the incorrect alarm information in the enterprise WeChat-app mode.
- Fixed the problem of flink-1.19 not being able to submit tasks.
- Fixed the startup script not supporting jdk11.
- Fixed the problem of cluster instances not being deleted.
- Fixed the problem of UDF not finding the class in Flink SQL tasks.
- Fixed the problem of the data development page not updating the state when the size changes.
- Fixed the problem of not being able to get the latest high availability address defined in custom configuration.
- Fixed the problem of not recognizing the manual configuration of rest.address and rest.port.
Optimizations
- Optimized the prompt words in resource configuration.
- Optimized the DDL generation logic of the MySQL data source type.
- Optimized some front-end dependencies and front-end prompt information.
- Optimized the copy path function of the resource center, supporting multiple application scenarios within dinky.
- Optimized the monitoring function, using the monitoring function switch in dinky's configuration center to control all monitoring within dinky.
- Optimized some front-end judgment logic.
Restructuring
- Moved the alarm rules to the alarm route under the registration center.
- Removed Paimon as the monitoring storage medium, switched to SQLite, no longer strongly depends on the hadoop-uber package (except in Hadoop environments), and supports periodic cleaning.
- Restructured the monitoring page, removing some built-in service monitoring.
Documentation
- Added documentation for deploying dinky on k8s.
- Optimized the Docker deployment documentation.
- Added documentation related to whole library synchronization function (cdcsource) sink end table name mapping.
v1.0.3
Dinky-1.0.3 Release Note
Upgrade Instructions
1.0.3 is a bug-fix version with no table structure changes; no additional SQL scripts need to be executed during the upgrade. Just overwrite and install, paying attention to configuration file changes and dependency placement.
About the Scala version: the release uses Scala 2.12. If your environment must use Scala 2.11, please compile it yourself; refer to Compile and Deploy and change the profile from scala-2.12 to scala-2.11.
New Features
- Added the function of manually killing the process after the task is stuck during operation
Fixes
- Fix the problem that Yarn Application mode cannot execute tasks on Flink 1.19
- Fix the problem of start and stop scripts, adapt to the GC parameters of jdk 11
- Fix the issue where the UDF class cannot be found after publishing
- Fix the priority problem where the SET statement in Application task SQL cannot override configuration
Optimization
- Optimize monitoring to avoid high CPU load and unreleased threads in the Dinky service
- Optimize the Dinky monitoring configuration: the Configuration Center -> Global Configuration -> Metrics Configuration -> **Dinky JVM Monitor Switch** toggle controls whether Flink task monitoring is enabled
- Optimize the data type conversion logic of Oracle whole-database synchronization
- Optimize the front-end rendering performance and display effect of monitoring data
v1.0.2
Dinky-1.0.2 Release Note
Upgrade Instructions
- 1.0.2 is a bug-fix version with table structure/data changes; please execute DINKY_HOME/sql/upgrade/1.0.2_schema/&lt;data source type&gt;/dinky_dml.sql
About the Scala version: the release uses Scala 2.12. If your environment must use Scala 2.11, please compile it yourself; refer to Compile Deployment and change scala-2.12 in the profile to scala-2.11.
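The upgrade step above can be sketched as a shell fragment. Everything here is a placeholder assumption: DINKY_HOME, the data-source-type directory (shown as "mysql"), and the client command are illustrative only, so check your actual DINKY_HOME/sql/upgrade layout before running anything.

```shell
# Illustrative sketch of executing the 1.0.2 upgrade DML.
# DINKY_HOME and DB_TYPE are hypothetical placeholders.
DINKY_HOME=/opt/dinky
DB_TYPE=mysql   # substitute the directory matching your data source type
UPGRADE_SQL="${DINKY_HOME}/sql/upgrade/1.0.2_schema/${DB_TYPE}/dinky_dml.sql"
# Print the command instead of executing it, since this is only a sketch:
echo "mysql -u dinky -p dinky < ${UPGRADE_SQL}"
```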
New Feature
- Adapt to various Rest SvcTypes in KubernetesApplicationOperator mode and adjust the JobId acquisition judgment logic
- Added SSE heartbeat mechanism
- Added the function of automatically retrieving the latest highly available JobManager address (currently implemented in Yarn; not yet implemented in K8s)
- Added the function of clearing logs in the console during data development
- Support Flink1.19
- Add task group related configuration when pushing to Apache DolphinScheduler
- Support submitting YarnApplication tasks as a user-specified user
- The startup script adds GC related startup parameters and supports configuring the DINKY_HOME environment variable
- Implement FlinkSQL configuration item in cluster configuration to support RS protocol (Yarn mode only)
Fix
- Fixed the problem of global variables not being recognized in YarnApplication mode, and reconstructed the YarnApplication submission method
- Fixed the problem of data source heartbeat detection feedback error
- Fix the possible 404 issue in front-end route jump
- Fixed the issue of incorrect error prompt when global variable does not exist
- Fixed the issue of cursor movement and flickering in the editor during front-end data development
- Fixed the path error problem in the docker file of DockerfileDinkyFlink
- Fixed the problem of unrecognized configuration Python options
- Fixed null pointer exception in role user list
- Fixed some issues when submitting K8s tasks
- Fixed Oracle's Time type conversion problem when synchronizing the entire database
- Fixed the problem that k8s pod template cannot be parsed correctly
- Fixed the issue where SPI failed to load CodeGeneratorImpl
- Fixed an issue where numeric columns declared with UNSIGNED / ZEROFILL keywords would cause parsing mismatches
- Fixed the issue where the status of batch tasks is still unknown after completion
- Fixed some unsafe interfaces that can be accessed without login authentication
- Fixed the problem of unknown status in Pre-Job mode
- Fixed the problem of retrieving multiple job instances due to duplicate Jid
- Fixed the problem that the user list cannot be searched using worknum
- Fixed the problem that the query data button on the right side of the result Tag page cannot be correctly rendered when querying data.
- Fixed issues with print table syntax
- Fixed the problem that the resource list cannot be refreshed after adding or modifying it
- Fixed the issue of incorrect console rolling update task status for data development
- Fixed the problem of occasional packaging failure
- Fixed problems when building Git projects
Optimization
- Optimize start and stop scripts
- Optimize the problem of partial overflow of the global configuration page
- Optimize UDF management tips
- Optimize the user experience of the operation and maintenance center list page and support sorting by time
- Optimize the warehouse address of default data in Git projects
- Optimize flink jar task submission to support batch tasks
- Optimize the problem that the right-click menu cannot be clicked when it overflows the visual area.
- Optimize the primary key of the list component of the operation and maintenance center
- When modifying tasks, the modifiable template is optimized to an unmodifiable template.
- Optimize the display method and type of cluster configuration
- Optimize the logic of deleting clusters in K8s mode
- Fixed the problem that the cluster is not automatically released in Application mode
- Remove the logic of using Paimon for data source caching and change it to the default memory cache, which can be configured as redis cache
- Removed the automatic pop-up of Console when switching tasks
- Optimize the rendering logic of resource management. The resource management function cannot be used when resources are not turned on.
- Optimize the detection logic of login status
- Optimize login page feedback prompts
- Removed some useless code on the front end
- Optimize the issue where, when whole-database synchronization builds operator graphs multiple times, the operator graph order is inconsistent, preventing recovery from a savepoint
- Optimize some resource configuration tips
- Optimize and improve the copy function of the resource center, supporting all reference scenarios currently within Dinky
Safety
- Exclude some high-risk JMX exposed endpoints
Document
- Optimize expression variable expansion documentation
- Optimize some practical documents for synchronization of the entire database
- Add JDBC FAQ about tinyint type
- Added a carousel image on the home page of the official document website
- Fixed the description problem of resource configuration in document global configuration
- Added documents related to environment configuration in global configuration
- Delete some configuration items of Flink configuration in the global configuration
- Added document configuration description for alarm type email
v1.0.1
Dinky-1.0.1 Release Note
1.0.1 is a bug-fix version with no database upgrade changes; you can upgrade directly.
About the Scala version: the release uses Scala 2.12. If your environment must use Scala 2.11, please compile it yourself; refer to Compile Deployment and change scala-2.12 in the profile to scala-2.11.
New Feature
- Add some Flink Options classes to trigger shortcut prompts
- Implement automatic scrolling of console logs during data development
Fix
- Fixed the problem that the SMS alarm plug-in was not packaged
- Fixed NPE exception and some other issues when creating UDF
- Fixed job type rendering exception when creating tasks
- Fixed the issue of page crash when viewing Catalog during data development
- Fixed the parameter configuration problem when using 'add jar' with S3
- Fix some issues with the 'rs' protocol
- Fixed the routing error jump problem in the quick navigation in data development
- Fixed the issue that the console was not closed when selecting UDF task type
- Fixed the issue where the decimal data type exceeds 38 digits (values with more than 38 digits are converted to string)
- Fixed the problem that some pop-up boxes could not be closed
- Fixed the problem that global variables cannot be recognized in application mode
- Fixed the problem of array out-of-bounds when obtaining container in application mode
- Fix the problem that 'add file' cannot be parsed
Optimization
- Consolidate some front-end request URLs into shared constants
- Optimize the startup script and remove the FLINK_HOME environment variable loading
- Optimize the prompt message when the password is incorrect
- Optimize tag display of data development tasks
- Turn off automatic preview in the data development editor
- Optimize the expression variable definition method, changing from file definition to system configuration definition
- Optimize the prompt message that query statements are not supported in application mode
- Optimize the rendering effect of the FlinkSQL environment list
- Optimize the environment check exception prompt when building Git projects
- Optimize cluster heartbeat detection to avoid possible NPE issues
Document
- Add built-in variable documentation for whole-database synchronization
- Optimize document version
- Add an EXECUTE JAR task demo
- Optimize some copy tips when creating cluster configurations
- Optimize some paths in the whole-database synchronization document